Graeff’s experiments and 2LoD: Replication and Implications

Posted: June 28, 2012 by tchannon in atmosphere, Gravity, methodology

This is part four of a four part guest post by ‘Lucy Skywalker’.

Lucy Skywalker recaps: In Part One I described my visit to Graeff’s seminar. In Part Two I described some of his experiments in detail. In Part Three I showed how he developed the backing theory. Finally in Part Four I now consider the implications of this work, and plans for replicating the experiments. Replication is of crucial importance both to Climate Science in particular, and Science in general; without it, no theory is sacrosanct.


Here is a replication of one of Graeff’s experiments, assembled by him and ready to go. This particular experiment seems to be simpler in its results than the experiments we’ve looked at. But first, I want to think about implications of his work, to gauge where we want to pitch in.


Graeff has demonstrated that a modification of the full statement for the Second Law is needed, and that this is possible without contravening the essence of the Second Law. One place where his modification is clearly of importance is Climate Science.


Atmospheric temperature drops on average about 7°C for every kilometre of altitude gained, with variations depending on humidity and other factors. This is the adiabatic lapse rate, which is very familiar to meteorologists and pilots. Yet as far as I am aware, there is no theoretical basis that includes the molecular effect of gravity in the way Graeff shows must be at work. Now we suddenly have a very beautiful, very simple and very satisfying explanation for the adiabatic lapse rate: gravity acting on individual molecules, offset more or less by convection to produce a dynamic and somewhat unstable equilibrium.

The following is my fuller explanation, which I hope others can improve:

Under gravity, gas molecules fall and gain kinetic energy, which is warmth. But this increased kinetic energy causes them to repel each other more actively, so the gas will either expand-and-rise, or warm. We have a continual balancing act between what individual molecules do in microscopic response to gravity (fall and get warmer) and what groups of free molecules can do all together – i.e. convect (expand and rise as a parcel, gaining momentum as wind but cooling due both to expansion and to loss of gravitational kinetic energy).

The very existence of the adiabatic lapse rate (a.l.r.) strongly suggests the presence of a gravitational temperature gradient T(Gr). It appears that T(Gr) (0.07 K/m = 70°C/km) is about nine-tenths offset by convection in the free atmosphere to produce the familiar adiabatic lapse rate of around 7°C/km – and a habitable planet. This is unfamiliar, so it feels tricky at first. One has to imagine single molecules under gravity and, at the same time, parcels of molecules able to have a net collective action, i.e. convection.

In the free atmosphere, convection nearly overcomes T(Gr), but not quite, and the a.l.r. is the result. But in the far denser oceans, convection wins over T(Gr): cool water, being heavier, sinks. If convection were impeded, warmth would increase with depth – as happens in the solid earth. The convection needed to balance T(Gr) in air is scarcely noticed on this planet. But the sun shining through clear air to warm Earth’s surfaces creates noticeable convection currents which, again, undo most of the warming. The true greenhouse effect occurs in greenhouses, where convection is impeded.
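As a quick sanity check on the numbers in that explanation, a couple of lines suffice; this sketch uses only the figures quoted above (the 0.07 K/m gravitational gradient and the roughly nine-tenths convective offset):

```python
# Sanity check on the figures quoted above: a 0.07 K/m gravitational
# temperature gradient, about nine-tenths offset by convection.
t_gr = 0.07              # claimed gravitational gradient T(Gr), K/m (= 70 degC/km)
convective_offset = 0.9  # fraction of T(Gr) said to be offset by convection

residual = t_gr * (1 - convective_offset)  # K/m left over
print(residual * 1000)   # ~7 degC per km, the familiar adiabatic lapse rate
```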


I suspect that early meteorologists may have understood or intuited all this – it is not far removed from common sense. There is indeed sound evidence for the “greenhouse gas” properties of substances like CO2, O3 and CH4: we can see the evidence for ozone in the diagram above – but the ozone effect at least seems to be in situ, not projected down. Yet the IPCC claim a mere 33-degree greenhouse effect. This contradicts common sense and the ozone effect when we look at the 70-degree temperature difference between Earth’s tropopause and Earth’s surface; it is even further from explaining the 100-odd-degree temperature difference between the lunar surface temperature (now measured by Diviner) and the Earth surface temperature.

The gravity effect, as theorized by Graeff, can explain this large difference with no trouble at all (gravitational temp. gradient, minus local convection, equals local adiabatic lapse rate). It can explain the warmth in places below sea level, and down deep mines. Graeff’s theory also makes sense of the violent atmosphere on Jupiter and the very high temperatures of Venus’ surface and the interior of Jupiter. It starts to explain why our own planet is extremely hot at the core yet temperate at the surface. I have found nothing in the solar system that Graeff’s theory does not start to explain.

Graeff’s theory remedies a huge missing link.


People have strong reactions to challenges to familiar ways of thinking. Some people will go to great lengths to stay in denial of evidence, rather than look at something that requires them to consider changing their thinking and their belief system, maybe risk losing grants, status, obsessions, or credibility. What an embarrassment if one’s lifelong “expertise” might not look so great any more. We’ve seen the whole of orthodox Climate Science close ranks and do this to skeptics, in spades.

In Graeff’s case, it is not only “warmists” who may have such a reaction. It is “climate skeptics” as well. Graeff’s theory not only challenges one of the most sacrosanct of all the laws of science, it also gives strong credence to other challengers like Nikolov and Zeller.

OK, we have to ask questions. Surely Graeff’s challenge would have been seen and accepted at the time of Maxwell, if it were good science? Actually there was one good scientist, Loschmidt, who did dispute Maxwell. The really extraordinary thing is that until Graeff, nobody had checked Loschmidt’s challenge to Maxwell’s belief (and Gibbs’ mathematical theory) by practical experimentation.


Clearly we have to do very thorough checking, with replication of experiments. But we can save ourselves anxiety by remembering that only a few years back, Graeff’s experiments at laboratory scale would have yielded temperature differences too small and too fluctuating to measure. We didn’t have suitable materials, precision thermocouples and thermistors, or the computer power to record long sequences. But Graeff has developed sufficient methodology, has shown what accuracy and consistency to expect, and has shown how to wring statistically significant results out of fluctuating data, so that we can experiment and get valid results. The graphs below are a reminder from his water experiment. They show (1) how he wrung significance from the subtle effect of gradient fluctuations (thermocouples 1–8, fine scale on the LHS) lagging temperature rates of change (thermistors 9–14, larger scale on the RHS),


and (2) how he used those temperature gradient fluctuations (thermocouples) plotted against external temperature changes (thermistors) to obtain a very exact reading for the temperature gradient when the external conditions are not changing (the point where change per hour = zero).

NB: the thermistor readings above, while clearly accurate in showing change (they clearly move in step), are not so accurate in absolute terms. It is therefore neither clear nor necessary to know which lines represent which thermistors, between 9 and 14.
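The extrapolation described here can be sketched numerically: fit the gradient readings against the external rate of change and read off the intercept where the rate is zero. The numbers below are made up purely for illustration; they are not Graeff's data:

```python
import numpy as np

# Made-up illustration of the extrapolation described above: gradient
# readings (thermocouples) plotted against external temperature rate of
# change (thermistors); the intercept at zero rate estimates the
# undisturbed gradient.
rate = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])  # external change, K/hour (synthetic)
gradient = -0.05 + 0.10 * rate                # gradient readings, K/m (synthetic)

slope, intercept = np.polyfit(rate, gradient, 1)
print(intercept)  # the gradient when external change per hour = zero
```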


We will still have to deal with challenges rooted in emotional denial. We can expect a lot of repetition and a great show of apparently relevant knowledge that may be valid as claimed, though it is more likely to be straw men etc. This can be tiring and depressing. Scientists have even committed suicide following such responses, even when their work was correct and accepted years, decades or even centuries later.

But the other side of the challenges is eventually a very strong certainty. There is no way Science can escape dealing with emotional reactions. Rather, it is surely far better scientific practice to acknowledge human nature, and by acknowledgement, develop ways to deal with it. This is a big challenge to Science today – how to handle so-called “subjective” factors that Science rightly excluded early on, that are now kicking back so hard as to make corrupt nonsense of whole branches of Science.

At the simplest level, we can deal with this issue by simply becoming aware of what’s happening, without judgement, and preferably with compassion, whenever we see human frailty. Blogs can be brilliant places for developing this higher part of our human nature. Some awaken more quickly than others. But we are all on the journey.

At the end of all this, there is the excitement of real scientific discovery. Here is something important, right under our noses.



I want to see Graeff’s work replicated in ways people can trust. I believe this is crucial at all levels. Without basic replication, people will reasonably say that Graeff could be mistaken (though I personally have no doubts). Replication is thus the key to:

  • Putting right a flaw in a very basic and very important law of physics
  • Accepting that even the Second Law can still have amazingly “obvious” flaws
  • Opening up possibilities regarding alternative energy production – possibilities that may have been prematurely excluded
  • Helping restore Climate Science to a proper scientific footing – and stimulating reform therein
  • Opening the way to a more holistic approach to Science altogether – we ignore “subjective” factors at our peril.

My latest news is that Graeff is actually working right now with a professor at an American University who is hoping to replicate his experiments. This is the first time anyone has actually chosen to replicate him, and collaborate. Professor Sheehan has certainly taken a warm and perceptive interest, but has not undertaken replication. A Japanese professor set up an experiment, but using a centrifuge without enough controls to be able to measure a true gravitational effect as Graeff has done. Lack of replication is not for want of Graeff trying to interest the universities that should be doing this work…

I regard this US development as of crucial importance. If Graeff’s work is now going to be replicated in proper fashion, and written up so as to be able to achieve at least the first two points above, I shall feel that my involvement in direct replication is no longer needed. However, we are not yet at this point. And I do not know how many of my other points the professor may be open to. So there may still be work for us to do, particularly to help this work to be grasped and honoured by Climate Science.

If nothing comes of the US development, then I will still need to start with replication – in which case, the questions are:

  • Building a team – who wants to become involved? How can we build a good team? How about a trip to Graeff for this? Etc… I am willing to be a first contact point.
  • Location – should we start where I live and where there are already many others interested in parts of science that orthodoxy will not examine? Does Tallbloke want to take on this project where he lives? Can we find a friendly college laboratory? Etc…
  • Details: basic materials, sensitive equipment, skills, premises? How to find somewhere that maintains a constant temperature like Graeff’s thermostatically-controlled cellar? Do we simply collect scrap insulation / metal sleeves / etc or do we have money to go for “the best”? How much hands-on knowledge do we need? Can someone handle datalogger setup, calibration, and conversion to Excel on a PC? What can Graeff advise out of a lifetime of engineering? Etc…
  • Communication: getting the word out to the professionals who should be doing this work – do we want to aim to produce a paper for peer-review? should we include Sheehan and Graeff and others? at what point do we step back with “mission accomplished” ?

I am hoping to visit Graeff again this year, perhaps with a small group of people who are interested like myself. I think that such a visit would make a huge difference. And it may be now or never, since he is 84 – and it would be a great loss to all of us if we miss this opportunity.

Prepared for the web from documents supplied by Lucy Skywalker.


  1. br1 says:

    ferd berple:
    “What about the gas left inside the scuba tank? Why does it cool when the tank valve is opened?”

    Nice question! And thinking about it some more I think you have a point and my previous ‘surface only’ answer was not complete.

    The way I see it now is that (as well as the surface effect) there will be an effect like evaporative cooling. In this effect, the faster molecules make it into the new space sooner than the slower molecules. This sets up a transient where (when the JT coefficient is zero and the gas is expanding into a fixed volume) the gas in the new space will have a higher temperature and the gas remaining in the old space will have a lower temperature. This is a bit more like a volume effect, as the faster molecules can come from deeper in the gas. After a while, when the expansion is done, the two parts mix and the whole gas recovers the same temperature it started with. If the gas expands into an unlimited space, this second-stage recovery mixing doesn’t happen, and the gas left behind will have cooled slightly. If the JT coefficient is positive, then the gas expanding into the new space will also cool down, and the system can never recover its original temperature.

    The reason you don’t see this ‘evaporative cooling’ effect in the sim is, I reckon, due to the horizontal averaging of temperature. If you expand to the right then the transient temperature difference will be in vertical bands: the temperature on the right will be hot for a while and the temperature on the left will be cool. If you change the KE binning to show multiple bins horizontally you should be able to see this transient. Then the two sides should mix and the temperature will become uniform again. What do you reckon?

    Thanks for staying with this question!
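The transient br1 describes (the faster molecules make it into the new space sooner) can be illustrated with a toy 1-D calculation; this is only a sketch with made-up parameters, not the sim discussed in the thread:

```python
import random

random.seed(1)
# Toy 1-D picture of the transient described above: molecules fill [0, 1),
# the wall at x = 1 is removed, and after a short time dt we compare the
# speeds of molecules that crossed into the new space with those left behind.
N, dt = 100_000, 0.05
mols = [(random.random(), random.gauss(0.0, 1.0)) for _ in range(N)]  # (position, velocity)

entered = [abs(v) for x, v in mols if x + v * dt > 1.0]
stayed = [abs(v) for x, v in mols if x + v * dt <= 1.0]

mean_entered = sum(entered) / len(entered)
mean_stayed = sum(stayed) / len(stayed)
print(mean_entered > mean_stayed)  # True: the escapers are, on average, faster
```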

  2. ferd berple says:

    After a while, when the expansion is done, the two parts mix and the whole gas recovers the same temperature it started with.
    OK, this makes sense to me.

    I’ve re-written the code to match the matlab sim. It also includes buttons to expand and contract the container walls/ceiling/floor. The container preserves total E on contraction/expansion.

    It was a brute force code conversion and I haven’t optimized it yet. Limited to 200 mols. Runs slowly if the mols are increased much beyond this.

    The matlab sim doesn’t appear to be as limited in the number of mols, which makes me hope there is room to optimize. I’ll need to have a detailed look at the algorithm you use to recalc the min time list.

    I recalc it fully after each event, as it seems possible that a collision could cascade, invalidating previous calculations, resulting in spurious collisions on the list.

    The mols in the new version show up as streaks as the sim speed increases, which are a trace of the mol path since the last display refresh.

    Anyhow, this new sim might be of interest for trying out moving the walls/ceiling/floor.

  3. ferd berple says:

    OK, I can confirm it is the hotter molecules that move to fill the void when you expand the container on the right. my sim uses the color coding for each molecule and it is obvious.

    If I contract the container by moving the right wall to the left, the temp is unchanged, but the pressure goes way up. Then when I move the wall back, the pressure drops, and the hot molecules are the ones that move to fill the void.

    Moving the floor up without changing total E reduces the temp, and moving the ceiling down without changing total E increases the temp.

    I ran the sim overnight with 500 very small molecules and the reverse gradient is gone. I have a small positive gradient, with the “no walls” option on, and less of a gradient with a closed container.

  4. Q. Daniels says:

    br1 wrote:
    This time, the floor is uniformly heated to 10 K, the ceiling uniformly cooled to 1 K, with gravity present. The 1200 molecules spontaneously arrange into a convection cell(s), despite the lateral symmetry (see bottom left figure)! The cell can sometimes go CW, sometimes CCW, depending on what the starting noise was, and sometimes you get double cells. So this model is really quite useful for ideal gas visualisation.

    If you had less of a gradient, or more molecules, it should drop into a chaotic oscillator, going one way, pausing, and then going in a random direction, including the double cell. Pretty standard non-linear dynamics.

  5. br1 says:

    ferd berple:
    “I’ll need to have a detailed look at the algorithm you use to recalc the min time list.”

    yes, this is where the subtleties are.

    Initially I did what you did and recalculated everything each time, just to be on the safe side. As that was very slow (as you are finding), I spent quite a bit of time debugging the technique where only the molecules which collided are updated. That was harder than it sounded, but it seems like the code works.

    I can also see a few other smart ways to reduce the recalculation time, so if I get to rewrite this in Java I expect it to be about twice as fast as the current Matlab version, which can already handle 2000 molecules without having a heart attack.

  6. br1 says:

    Q. Daniels:
    “If you had less of a gradient, or more molecules, it should drop into a chaotic oscillator, going one way, pausing, and then going in a random direction, including the double cell. Pretty standard non-linear dynamics.”

    yes, it does this.

  7. ferd berple says:

    That was harder than it sounded, but it seems like the code works
    Yes, I added the logic to optimize “already calculated” and it immediately introduced lots of errors. The devil is always in the debugging.

    What I do find interesting is that overall, calculating the future events rather than cleaning up the past makes the code quite a bit simpler.

  8. br1 says:

    In trying to get a DALR, I went back and calculated what I should expect to see.

    We have a 2D gas, and from Velasco1996, this gives

    T = E/(2.N.kB)

    where E is total energy, N is number of molecules and kB is Boltzmann’s constant. The equation for specific heat is

    E = M.Cv.T

    where M is the total mass and Cv is the specific heat at constant volume. One can simply put these two equations together to get

    Cv = 2.kB/m

    where m is the mass per molecule.

    Cp is related to Cv by Cp = γ.Cv, with γ = (f+2)/f and f=2 as we are dealing with a 2D gas, so γ = 2 (not γ = 1.66 as I said above). This gives

    Cp = 4.kB/m

    Taking m=4.8e-26 kg for a typical air molecule this gives our 2D gas

    Cp = 1150 J/kg/K

    which is basically the same as the measured value. Hence the DALR in the model should be

    DALR = g/Cp = g.m/(4.kB)

    = 0.009 K/m

    pretty much the same as observation, even though our gas is 2D not 3D.
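The closing figures in this comment (Cp ≈ 1150 J/kg/K and a model DALR of roughly 0.009 K/m) can be checked in a few lines; Cp = 4·kB/m is the 2D-gas value used later in the thread (DALR = g·m/(4·kB)):

```python
# Check of the 2D-gas figures above: Cp = 4*kB/m, DALR = g/Cp = g*m/(4*kB).
kB = 1.380649e-23  # Boltzmann's constant, J/K
m = 4.8e-26        # mass of a typical air molecule, kg (value used above)
g = 9.81           # acceleration due to gravity, m/s^2

Cp = 4 * kB / m    # specific heat at constant pressure for the 2D gas, J/kg/K
dalr = g / Cp      # dry adiabatic lapse rate in the model, K/m

print(Cp)          # ~1150 J/kg/K, close to the measured value
print(dalr)        # ~0.0085 K/m, i.e. the quoted ~0.009 K/m
```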

    However, in the sim I am getting nothing close to this. I have tried all sorts of different thermal gradients (at ground level only (imposing a thermal gradient with height is almost like cheating!)), and used fans pointing in different directions (including horizontal), but I have yet to get a robust temperature drop-off with height. Indeed, every non-equilibrium circulation I try gives an increasing temperature with altitude. I put this down to the barometric formula – as density reduces with altitude according to n=n0.exp(-mgh/kB.T) then regions with higher temperature will throw more molecules to higher altitude than regions of colder temperature. Hence the average molecule could be equally hot or cold at low altitude, but a molecule at high altitude will much more likely have come from a hot region, and as temperature is approximately conserved with height, then the higher regions will be hotter. This reproduces the result that in a closed room the temperature is usually hotter at the ceiling than the floor, even though the greater atmosphere is not like that. I don’t count Velasco Eqn(8) as a robust gradient, as this goes to zero with greater molecule number – I need something much greater than that.
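The barometric argument in the paragraph above can be put in numbers: for an isothermal column, the fraction of molecules found above altitude h follows n/n0 = exp(-m·g·h/(kB·T)), so a hotter surface region lofts a larger share of its molecules high. A minimal sketch, with altitude and temperatures chosen arbitrarily:

```python
import math

# Barometric formula n = n0 * exp(-m*g*h/(kB*T)): fraction of an
# isothermal column's molecules found above altitude h.
kB = 1.380649e-23  # J/K
m = 4.8e-26        # kg, typical air molecule (as above)
g = 9.81           # m/s^2
h = 5000.0         # altitude, m (arbitrary choice)

def fraction_above(T):
    return math.exp(-m * g * h / (kB * T))

hot, cold = fraction_above(300.0), fraction_above(250.0)
print(hot > cold)  # True: the hotter region populates high altitude more
```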

    The only way I can get a decent temperature fall off with height is to have heating at the bottom and cooling at the top. It’s starting to seem to me that (as someone else around here said!) the main reason the atmosphere is cooler as one goes up in altitude is not due to gravity or circulations per se (which I’ve been trying to simulate), but due to radiation losses to space – this cooling then allows the circulations to be set up, and for stable and unstable lapse rates to be established. I’ll play around with applying thermal gradients next and see what the resulting circulations do.

    It may be that I need much greater particle number, due to the question ‘what is a parcel anyway?’, but the parcel still must dump its energy at high altitude, as it performs work, so unless that energy can be dissipated it will still leave the system hot on top.

  9. ferd berple says:

    It’s starting to seem to me that (as someone else around here said!) the main reason the atmosphere is cooler as one goes up in altitude is not due to gravity or circulations per se (which I’ve been trying to simulate), but due to radiation losses to space
    Doesn’t this lead to a contradiction of current GHG theory?

    N2 and O2 are essentially non-radiating at 300 K. Thus, if cooling is due to radiation, then without GHG the atmosphere would be warmer at altitude than at present, which would reduce convection from the surface, making the surface hotter.

    Thus, if GHG are the reason the atmosphere is cooler with altitude, then GHG must provide a net cooling effect at the surface, contrary to the belief that GHG warms the surface.

    The lapse rate then becomes the maximum amount of cooling permitted before gravity and convection step in to limit the rate of cooling.

    Thus adding GHG to the atmosphere will have no effect on temps, because the only effect it can have is to provide a net cooling, and this is already limited by the lapse rate

  10. Bryan says:

    ferd berple says

    “Thus adding GHG to the atmosphere will have no effect on temps, because the only effect it can have is to provide a net cooling, and this is already limited by the lapse rate”

    This is very much in line with a recent post by Leonard Weinstein at scienceofdoom.

  11. br1 says:

    ferd and Bryan:
    “This is very much in line with a recent post by Leonard Weinstein at scienceofdoom.”

    I read that article, and got the exact opposite meaning – adding GHGs will result in an increase in surface temperature. I didn’t read all the discussion (nearly as long as this thread!!!), but I didn’t see any comments that came to any other conclusion.

    Weinstein’s main argument, as I read it, was that GHGs make the atmosphere more opaque to radiation around 10 um. Hence the 300 K thermal radiation from the surface has a harder time escaping, so the surface temperature needs to increase in order to maintain the same power loss.

    That GHGs are better emitters as well as absorbers increases the emission from the upper atmosphere, so the ‘top of atmosphere’ moves to higher altitude, but this just makes the atmosphere ‘thicker’. The discussion about ‘what does back radiation do anyway?’ did not change the general conclusion, it was more a question of how one talks about it.

    By the way, during my discussion with Doug Cotton on this blog, I wrote an atmosphere radiation transfer model, downloaded the Hitran absorption coefficient database, and set up a simple planar atmosphere to address this question. It had the general property that increasing CO2 increased the surface temperature. I wouldn’t like to say by how much (due to the various simplifications), but there was no way the surface cooled. I can send that on to ferd if you like.

  12. Bryan says:


    Leonard’s main point was that ‘backradiation’ did not heat the Earth surface.
    IR active gases like CO2 and H2O allow the Earth to cool at the TOA.
    Adding some more CO2 simply lifts the troposphere a little.
    Leonard did a calculation showing that the resulting increased surface temperature was minimal and would hardly be measurable.
    The insulating effect of the atmosphere moderates the excessive high and low temperatures.
    Without an atmosphere, solar radiation at the equator would produce temperatures of around 120°C.
    Its pretty clear that in this respect the atmosphere has a cooling function.
    At night in the Antarctic temperatures would drop even lower without an atmosphere.
    The insulating effect of the atmosphere works in both directions reducing heat flow in and out.
    Gravity and heat capacity (Cp) determine the lapse rate of dry air.

  13. ferd berple says:

    If GHGs already have a net cooling effect – TOA cooler than surface – adding more GHGs cannot somehow reverse this and have a net warming effect, unless the amount of GHG were somehow at a magical minimum/maximum for cooling/warming.

    So, the argument that GHG is causing the net cooling at the TOA as compared to the surface means that any blockage of outgoing radiation is more than offset by increased radiation and blockage of incoming radiation. Or the current GHG would have already caused the TOA to be warmer than the surface.

    The sim does show that there is a gravity PE/KE warming/cooling effect, but so far we haven’t found a way to use this to recreate the DALR.

    This doesn’t necessarily mean that radiation is the cause of cooling TOA, as we would need to demonstrate the DALR using a heater at the surface and a cooler at TOA.

  14. br1 says:


    I was surprised at your previous post, so I went back and read the linked thread in a little more detail, paying attention to your posts. All I’ll say is that I think you have substantially misinterpreted the original post. However, this thread is not the place to start all that again, but I’d be happy to discuss this elsewhere if it comes up again.

    In the mean time, I’m starting to see a real DALR in our 2D gas sim. This does not contradict my DALR post from yesterday! More shortly.

  15. Bryan says:


    With no IR active gases in atmosphere.

    Means at night

    Earth surface cools by radiation
    Air near Earth surface cools by contact with surface
    Dense layer forms cutting out the possibility of convection.
    At higher altitude N2 and O2 stuck at high temperature without means to cool down
    Air at top of troposphere warmer than at bottom.
    An inverted lapse rate.

    However there are active gases in atmosphere (CO2 and H2O)
    This gives a path for N2 and O2 to cool by collisional activation then emission of IR radiation.
    This cooling effect makes possible our lapse rate.

    Agree with you

    “However, this thread is not the place to start all that again,”

    Good luck with your simulation.

  16. Q. Daniels says:

    I wrote:
    If you had less of a gradient, or more molecules, it should drop into a chaotic oscillator, going one way, pausing, and then going in a random direction, including the double cell. Pretty standard non-linear dynamics.

    br1 wrote:
    yes, it does this.

    We’re talking about chaotic systems. Around some threshold, they will switch from one mode to another (ie laminar versus turbulent). The model densities (MFP) and gravity levels I was looking at earlier were more typical of the thermosphere than the troposphere.

    The only times I saw something approaching a DALR was when gravity was turned down to almost nothing with thousands of particles. At that point, the vertical fluctuations were roughly equal in magnitude to the horizontal fluctuations.

    If the fluctuations scale with N^0.5, and we work with cubic volumes, then we’re still uniform over all scales (z^3/z)^0.5.

    Thinking further along those lines, I found this paper:

    I started digging into it, but I got distracted.

    My thought for how to go forward is Navier-Stokes for a compressible fluid (since density does change), and Fluctuation Theorem (equilibrium is the average of the fluctuations).

  17. br1 says:

    Hi Q.

    “My thought for how to go forward is Navier-Stokes for a compressible fluid (since density does change), and Fluctuation Theorem (equilibrium is the average of the fluctuations).”

    While I know of the Navier-Stokes equations, I have never really worked with them. One thing that puzzles me about them (and, for example, the paper you linked, or Bernoulli’s equation etc) is that there is no mention of temperature? Or am I wrong?

    It is all very well for the parameters that go into it being temperature dependent, for example viscosity has a temperature dependence, but how does one get temperature *out* of these equations?

    A quick search for Navier-Stokes and temperature came up with this example, but I don’t think this helps, as it only finds a temperature profile after a temperature difference is imposed. If there were no *applied* temperature difference, then it seems the solution would be isothermal. Maybe.

  18. br1 says:

    “In the mean time, I’m starting to see a real DALR in our 2D gas sim. This does not contradict my DALR post from yesterday! More shortly.”

    Ok, here is the general result:

    I wrote a commentary along with the picture:
    “In this 2D kinetic gas model, a heater is placed on the floor, shown by the red line in the left two figures. There is an 80 K difference between the heater and the rest of the floor. One can see from the bottom left figure that this sets up a circulation, and from the bottom right figure that the temperature with height varies above each section of the floor. In the bottom right figure, the red trace shows temperature with altitude above the heater – this updraft gives a temperature which drops off with a lapse rate given by DALR=g/Cp=g.m/(4*kB) where g is the acceleration due to gravity, m is the mass per molecule and kB is Boltzmann’s constant. This equation gives the thin black line. The blue curve shows the temperature above the right hand side of the floor where there is a downdraft. In the case of a pure DALR, the temperature should increase as the gas descends, but as the circulation is over a cold section of the floor, conduction imposes that the temperature must decrease. So while conduction is undoubtedly significant, one can see that the rates in the updraft and downdraft are different, implying an adiabatic component. At altitudes above 1000 m, where there are no updrafts or downdrafts, the temperature remains isothermal with altitude and is equal across the different regions. This approximately reproduces the troposphere and thermosphere behaviour. One difference between this model and the real atmosphere is that the average temperature of the whole gas actually increases with height. To get a net reduction in temperature with height, one needs to lose heat from the higher altitude layers. This is done in the atmosphere by radiation to space, but that is not included in this model.”

    So I sort of got the troposphere/thermosphere behaviour I was looking for.

    It is clear though that as it stands the model won’t reproduce Graeff’s result, despite the presence of gravity.

  19. br1 says:

    Just an update to say that conduction effects are probably way too high in the previous figure to give a good example of DALR. Still, one can see a circulation region up to a certain altitude, above which there is an isothermal non-circulating region, all driven by temperature differentials.

    From playing with the model, I’m wondering whether one can get a DALR based on steady-state thermal sources. It may well be the case that the day/night heating and cooling also plays a very significant role, as this could generate ‘pulses’ of thermal energy which may be needed to generate a ‘parcel’ of air which can lift and expand into the surroundings. If there are only steady sources, then one gets circulations, but one ends up with a steady solution which has already compensated for the different pressures at different altitudes, so after a while no further expansion work is done. At which time one gets conduction temperature differences but no work temperature differences. Maybe.

  20. ferdberple says:

    I never did get the MATLAB-copy optimized version of the sim going. I installed code to check for overlap on the molecules, and it found that some collisions were being missed. I see that the Matlab version also has code for spurious collisions, which suggests that maybe Matlab is seeing some of the same problems. I also get spurious collisions in my optimized recalc code.

    My optimized code marks for recalculation every molecule that collides with anything, and every molecule that is predicted to collide with that first molecule, recursively until exhaustion, and still it misses some molecules that need recalculation.

    I have a single switch I can change on the fly between full and optimized recalc, and the test routines show no errors over days of running with full recalc, and within less than a minute at reasonable speed I start seeing reports of overlapped molecules or spurious collisions when I switch to optimized recalc.

    This suggests that calculating which molecules need recalc is more difficult than it might first appear. I haven’t been able to work out what is going wrong; there must be some condition under which a molecule needs recalc that is not being detected.
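The bookkeeping ferd describes hinges on the pair collision-time prediction that has to be redone after every event. A minimal sketch of that core calculation, in Python rather than the MATLAB/Java used in these comments (the function and variable names are mine):

```python
import math

def time_to_collision(p1, v1, p2, v2, radius):
    """Earliest future time at which two hard discs of equal `radius`
    touch, or None if they never collide. Solves |dr + dv*t| = 2*radius."""
    drx, dry = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    a = dvx * dvx + dvy * dvy              # |dv|^2
    b = drx * dvx + dry * dvy              # dr . dv (negative if approaching)
    c = drx * drx + dry * dry - (2 * radius) ** 2
    if b >= 0 or a == 0:                   # receding, or identical velocities
        return None
    disc = b * b - a * c
    if disc < 0:                           # trajectories never reach contact
        return None
    return (-b - math.sqrt(disc)) / a      # smaller root = first contact

# Head-on case: a gap of 2 closed at relative speed 2 gives t = 1.
t = time_to_collision((0, 0), (1, 0), (3, 0), (-1, 0), 0.5)
print(t)  # 1.0
```

After a collision, the stored times involving both participants must be recomputed, and, crucially, so must those of any third molecule whose stored earliest event involved either participant; missing part of that dependency chain produces exactly the kind of silent failure being described here.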

  21. ferdberple says:

    Also, using the sim in full recalc mode, I do get a small positive gradient: warmer at the surface. Not as large as Graeff’s, but it is consistent.

  22. ferdberple says:

    PS: the optimized recalc still makes mistakes if you run the sim slowly; it just takes longer for them to appear. The underlying problem appears to be in predicting which molecules are going to need a recalc when a collision (with a wall or another molecule) takes place. In theory only the molecules that have a relationship to the affected molecule, plus the affected molecule itself, should be involved in the recalc, but after very many attempts to get this right in the code it doesn’t work reliably.

    It appears that under some conditions this simply doesn’t work. Even with a trivial sim of only 3 molecules, at some point in time it fails unless everything is recalced on each collision. There doesn’t appear to be anything consistent in the preceding events, except perhaps that the failures are more common when collisions are complex.

  23. br1 says:

    Hi ferd,

    Thanks to your pursuit of this, I can see one flaw in my recalc, but I haven’t had time to fix it. I’ll let you know how I get on.

    By the way, when you say that an overlap is found, what is the percentage? Do you find 1%, 10%, or even completely overlooked collisions? I’ll put overlap detection into my code and see what I get.

  24. br1 says:

    OK, I think I’ve solved the collisions (again!) – I’ve sent you another copy. From my first tests, I can’t see any errors. What do you think?

    The overlaps I saw were on the order of 1e-14 or less, which is just double-precision error. What is your condition for ‘fail’?

    Note that lines 178-181 calculate the gradient expected from Velasco1996, but also see my comment above, where finite-sized molecules will be slightly cooler and give a (slightly) steeper gradient than Velasco.
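Overlaps of order 1e-14 are indeed what pure floating-point rounding looks like. A small standalone check (Python, with arbitrary numbers of my own choosing): advance two discs analytically to their predicted contact time and measure the residual separation error.

```python
import math

# Two discs of radius R on a collision course. Non-round numbers are
# chosen so the arithmetic is not exact in binary floating point.
R = 0.4
p1, v1 = [0.0, 0.0], [0.3, 0.1]
p2, v2 = [2.0, 0.5], [-0.2, -0.1]

# Solve |dr + dv*t| = 2R for the first contact time t.
drx, dry = p2[0] - p1[0], p2[1] - p1[1]
dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
a = dvx**2 + dvy**2
b = drx*dvx + dry*dvy
c = drx**2 + dry**2 - (2*R)**2
t = (-b - math.sqrt(b*b - a*c)) / a

# Advance both discs to time t and measure centre-to-centre distance.
normd = math.hypot((p2[0] + v2[0]*t) - (p1[0] + v1[0]*t),
                   (p2[1] + v2[1]*t) - (p1[1] + v1[1]*t))
residual = normd - 2*R
print(residual)  # tiny (~1e-16 scale): far below any physical overlap
```

So any overlap threshold well above machine epsilon, like the 1e-5 used in the check below, safely separates rounding noise from genuinely missed collisions.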

  25. ferd berple says:

    I’ve added this as a test, as it was similar to the test I used. I’m not hearing any beeps, so it looks like you have it working.

    if normd < 2*r - .00001       % check for any significant overlap
        % (2*r - normd)/(2*r) is the fractional overlap
        beep                      % audible alarm when an overlap is found
    end

    But the speed appears to be no better (or worse) than recalc all?

    I am seeing a small negative gradient as per Velasco. Full recalc still gives reasonable speed in Java with 300 molecules; much more than that and it bogs down.

  26. br1 says:

    Hi ferd,

    Glad we appear to be making progress.

    In the Matlab version I sent you, I turned down the plotdt and storedt values in order to slow the sim down and try to spot any collision errors by eye. Now that we don’t seem to be getting any, we can turn these back up (say to plotdt=0.01; storedt=0.005; or higher). I can still run the sim with 2000 molecules in a fairly dense packing without it being painfully slow (using 25% CPU time on a 4-processor PC), so we could go a bit higher if needed. I expect Java to be faster than Matlab, which is a semi-interpreted language. Which reminds me, I must give Java a go again – I can think of quite a few computational short-cuts I could take instead of doing full Matlab matrix calculations (which are more convenient but do some unnecessary work too).
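One common family of short-cuts for this kind of simulation (my illustration, not necessarily what br1 has in mind) is a cell list: bin the molecules into a grid so each one is only tested against neighbours in adjacent cells rather than against all N. A sketch in Python:

```python
from collections import defaultdict
from itertools import product

def candidate_pairs(positions, cell_size):
    """Cell-list pruning: return the set of index pairs close enough to
    need an exact collision test. With cell_size >= the interaction
    range, every truly interacting pair is guaranteed to be included."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)
    pairs = set()
    for (cx, cy), members in grid.items():
        # Scan this cell and its 8 neighbours only.
        for dx, dy in product((-1, 0, 1), repeat=2):
            for j in grid.get((cx + dx, cy + dy), ()):
                for i in members:
                    if i < j:
                        pairs.add((i, j))
    return pairs

# Two tight clusters far apart: only the within-cluster pairs survive.
positions = [(0.1, 0.1), (0.15, 0.12), (5.0, 5.0), (5.05, 5.02)]
print(sorted(candidate_pairs(positions, cell_size=1.0)))
# [(0, 1), (2, 3)]
```

Brute force tests N(N−1)/2 pairs every step; with roughly uniform density, the cell list cuts this to O(N), which is where most of the speed difference between naive and optimized molecular sims comes from.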