Met Office, HadSST3, CRUTEM3, HadCRUT3, questions over gridded coverage

Posted: September 19, 2014 by tchannon in Analysis, Dataset


The UK Met Office Hadley Centre and the Climatic Research Unit (UEA) construct and publish global temperature time series based on published 5-degree gridded data. How this is derived from land meteorological station readings and from shipboard readings for sea surface temperature is unclear. The step from gridded to, e.g., global is a simple cosine-weighted average, which accounts for the varying area that cells of a regular latitude/longitude grid cover on a sphere.
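As a minimal sketch of that weighting (NumPy; the array shape and names are assumptions for illustration, not taken from the Met Office code), a cosine-weighted mean over a 5-degree grid might look like:

```python
import numpy as np

def global_mean(grid):
    """Cosine-weighted average over a 36 x 72 (lat x lon) 5-degree grid.
    NaN marks an empty cell; empty cells are excluded from the weighting."""
    lat_centres = np.arange(-87.5, 90.0, 5.0)    # 36 cell-centre latitudes
    weights = np.cos(np.radians(lat_centres))    # relative cell area per band
    w2d = np.repeat(weights[:, None], grid.shape[1], axis=1)
    mask = ~np.isnan(grid)
    return np.sum(np.where(mask, grid, 0.0) * w2d) / np.sum(w2d[mask])

# A uniform field averages to itself, regardless of the weighting.
demo = np.full((36, 72), 1.0)
print(global_mean(demo))    # prints 1.0
```

Cells missing in a given month simply drop out of both numerator and denominator, which is why coverage (the subject of the plots below) matters so much.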

I have put together maps showing the data counts for each decade over a world shore outline. These are provided as vector plots (the master work) in PDF or, for casual viewing, PNG. The results are disturbing, particularly in light of the Met Office producing 100 different versions of HadSST3: “Each of the following files is a zip archive containing ten realisations of the HadSST3 data set. There are 100 realisations in total.”

Do I detect obfuscation, flapping for distraction?

[update] Roger Andrews has pointed out this work ought to use HadSST2; my mistake. I’m not sure what to do about this: updating the files is not too difficult, but is there a material difference? I’ve created new files and am looking at introducing the HadCRUT4 data. [/update]



The image at left shows the cell covering the majority of France, dates 1950..1959. The black squiggle is coastline, the red squiggle a national border. HadSST3 has 60 months of data here. How exactly is the sea temperature used?




Here is another; the majority of the area is land, not sea.

The situation has improved with time; earlier on there are what look like highly questionable instances.



I have a database here containing the gridded datasets in a common format. A time series can be extracted from any grid cell. Given the amount of code involved in this whole work, treat me as unreliable.
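A hedged sketch of such a cell extraction, assuming the grids are held as a NumPy array of shape (months, 36, 72) with rows running south to north; the layout and helper names are illustrative, not the actual database here:

```python
import numpy as np

def cell_index(lat, lon):
    """Map a latitude/longitude to indices in a 5-degree grid.
    Assumes rows run south to north from -90 and columns west to east from -180."""
    row = int((lat + 90.0) // 5)
    col = int((lon + 180.0) // 5)
    return row, col

def extract_series(grids, lat, lon):
    """Return the monthly time series for the cell containing (lat, lon).
    `grids` is assumed shaped (months, 36, 72), NaN where there is no data."""
    row, col = cell_index(lat, lon)
    return grids[:, row, col]

# Example: the cell over central France (roughly 47N, 2E).
demo = np.zeros((120, 36, 72))    # ten years of placeholder grids
series = extract_series(demo, 47.0, 2.0)
print(series.shape)    # prints (120,)
```

Counting the non-NaN entries of such a series per decade is essentially what the grid-count plots below tabulate.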

Sanity check

An area of Europe which includes an abnormal weather event was selected for time-series extraction.




Plots of grid counts

The master file is the PDF; the PNG is only for those who need an image (less legible; use save-to-file). File sizes have been minimised: PDF about 600 kB, PNG (3175 x 2246) about 500 kB. The full sheet will just print legibly on A3 paper.
With the PDF you are expected to magnify as required.

1850..1859 PDF PNG
1860..1869 PDF PNG
1870..1879 PDF PNG
1880..1889 PDF PNG
1890..1899 PDF PNG
1900..1909 PDF PNG
1910..1919 PDF PNG
1920..1929 PDF PNG
1930..1939 PDF PNG
1940..1949 PDF PNG
1950..1959 PDF PNG
1960..1969 PDF PNG
1970..1979 PDF PNG
1980..1989 PDF PNG
1990..1999 PDF PNG
2000..2009 PDF PNG
2010..2013 PDF PNG

The last decade is short, 2010..2013, see notes.


Conclusion

There isn’t one. This work is informational where I hope others will step in.

Within reason I can extract data and make it available. The sheer size of the data should be kept in mind: there are >2,500 grid cells (a 5-degree grid is 72 x 36 = 2,592 cells).


Notes

The cutoff at the end of 2013 is pragmatic. At the time of writing the three underlying datasets finish at slightly different dates during 2014. Omission is the easy option and has no material effect on the result.
Data providers
HadCRUT3:- Met Office
HadSST3:- Met Office
CRUTEM3:- Met Office
Climatic Research Unit page on temperature datasets
Images are marked copyright to enable copying; otherwise the situation is in legal limbo, which blocks strict usage.
Coastal vector data from NOAA GEODAS
The resolution and colour usage in the plots is a compromise. I don’t like visually hitting people in the face; this is technical work where function comes first. The figures are low key so the shore outline shows through. If one is not legible in the PDF, magnify further.

This post also appears on my own blog.

Post by Tim

  1. Roger Andrews says:

    Here’s the real obfuscation:

  2. tchannon says:

    Thought you might be along 🙂

  3. Roger Andrews says:

    Well, I’d hate to disappoint you Tim, but please read it.

    And I think you’re talking about HadCRUT4 and CRUTEM4, not HadCRUT3 and CRUTEM3, correct? The “3” series goes with HadSST2 as I recollect.

  4. tchannon says:

    The downloaded current files are all 3. There is no gridded 4 on the system here.

    hadsst2 and crutem2 are present. I’ll check 2 vs 3 tomorrow. It would be annoying, but switching to SST2, assuming there is a post-2012 update, would be simple.

  5. Roger Andrews says:

    HadSST3 and HadSST2 are up to 0.2C different over the 1955-59 period, so you could have some adjustment-generated mismatches with CRUTEM3 and HadCRUT3.

  6. Paul Vaughan says:


    “There isn’t one. This work is informational where I hope others will step in.”

    Eminently sensible. Refreshing. I applaud this approach.

  7. Paul Vaughan says:

    RA shared this:

    It’s a classic example of Simpson’s paradox.

    In that pattern, sampling intensity is confounded with 2 leading modes of spatial pattern:
    a) north-south land-ocean asymmetry.
    b) AMOC.

    Tim’s “Plots of grid counts” show this clearly.

    Vaughan Pratt is seriously misinterpreting land-ocean contrast evolution. The climate discussion is severely hindered by irresponsible ignorance of spatial pattern and spatiotemporal confounding. Worse: The solar-terrestrial thought police (who are militantly active on sites like WUWT & CE) deliberately deceive with a narrative based on false assumptions about spatial pattern.

  8. Paul Vaughan says:

    Has everyone taken a look at Mann+ (2009)? If not, please take a look at the temporal patterns and answer me seriously: What is your impulsive first instinct about the adequacy of their sampling in the southern extra-tropics???

  9. Paul Vaughan says:

    Tim: If you find time to color-code the “Plots of grid counts” (on a scale from cool blue to red hot through neutral black or white), I’ll be appreciative if you share. (This will speed up by orders of magnitude the rate of human cognitive processing.)

    RA: When I worked in ecology we ran into sampling challenges orders of magnitude worse. With awareness of natural clumping and good instincts about aggregation criteria, difficult questions can be answered economically with sparse sampling. As an example of sparse sampling that accurately isolates a leading mode of variation, here’s an interannual summary of ICOADS wind:

    The data certainly aren’t useless.

    Thanks for sharing stimulants — much appreciated.

  10. Paul Vaughan says:

    Mann+ (2009):

    “The reconstruction skill diagnostics suggest that the MCA and LIA reconstructions are most reliable (Fig. 2) over the Northern Hemisphere and tropics, and least reliable in the Southern Hemisphere, particularly in the extratropics.”

    Mann, M.E.; Zhang, Z.; Rutherford, S.; Bradley, R.S.; Hughes, M.K.; Shindell, D.; Ammann, C.; Faluvegi, G.; & Ni, F. (2009). Global signatures and dynamical origins of the Little Ice Age and Medieval Climate Anomaly. Science 326, 1256-1260.

    Click to access MannetalScience09.pdf


    Mainstream climate models hard-fail (decisive grade F) on spatial pattern.

  11. Roger Andrews says:

    PV: difficult questions can be answered economically with sparse sampling

    Indeed they can. But calculating mean global temperatures is not a difficult question. You can in fact select, say, twenty well-spaced stations at random from the ~8,000-station GHCN database, and from them you will get a global time series which is a close match to the series you get using all 8,000 stations.

    There are sampling challenges orders of magnitude worse in mining too, such as uranium, placer gold and (particularly) diamond deposits. But we manage to get the right answer most of the time.

  12. tchannon says:

    Roger Andrews,

    The intent was HadCRUT3 and you are correct I should use HadSST2, my mistake.

    SST2 is part of the database here if not updated recently.
    After some figuring I’ve complained again to the Met Office; they’ve screwed up their (Fedora) servers again (last time it was the same problem, and a lame reply).

    Workaround is possible by hand. Yes there is later SST2 data.

    Question now arises of what to do. I can fairly easily rework using SST2, only holdup is I’ve shunted intermediate files into separate directories and have to add code to put things in the right places.

    HadCRUT4 is a mess: there is no best version, just statistical nonsense. Technically there is no valid computation of a global figure given the data (the elephant is Nyquist).

    That leaves fudges. If the underlying idea is post-data dither then say so, not that it is valid. Adding noise to data and then removing it again does not change anything.

    Assuming the HadCRUT4 data is reasonably compatible I might be able to do something. Is there any point if no one is going to use it?

  13. Paul Vaughan says:

    Aggregation criteria are the spice of climate discussion.

    The surface is a thin interface between higher heat capacity below and higher kinetic energy above.

    Leaving known systematic spatiotemporal sampling biases uncorrected would be a serious error. But sensible correction is challenging. (ERSST does the best job on public offer to the best of my current knowledge & awareness.)

    Ignorance of external bounds on latitudinal-gradient-(i.e. wind)-driven vertical mixing (includes welling, evaporation/precipitation, ice transport, etc.) is a serious obstacle to more sensible conceptual frameworks than “global surface average” can ever afford.

    Recommended Reading:

    Davis, B.A.S.; & Brewer, S. (2011). A unified approach to Orbital, Solar and Lunar forcing based on Earth’s Latitudinal Insolation/Temperature Gradient. Quaternary Science Reviews 30(15-16), 1861-1874.

    Davis, B.A.S.; & Brewer, S. (2009). Orbital forcing and role of the Latitudinal Insolation/Temperature Gradient. Climate Dynamics 32, 143-165.

    Click to access 72e7e51a6448a2e1d7.pdf

    (Access Tip: Try copying/pasting that into a new browser tab if simple clicking leads to a redirect (to something other than the pdf) — I’ve tried this work-around successfully.)

    Davis, B.; Mauri, A; Kaplan, J.; & Brewer, S. (2009). [Poster] Which orbital forcing caused the mid-Holocene thermal optimum?

    Click to access pub_2009_Thermal_optimum2poster.pdf

    Tim has the right idea:

    There’s no compelling need to immediately answer (in comically oversimplified political-binary) every single exploratory question raised. We can stew on it….

    Best Regards

  14. tchannon says:

    Not too hard; after twiddling the code it is now chewing through generating PDF and shrunk PNG using SST2.

  15. Tim Hammond says:

    Are there any well-sited stations with a reasonably long history that show all this warming without any adjustments?

    If so, how many?

    And if there are a lot, why do we need all this stuff?

    If there are not, then we actually do not have any real data that shows warming of the scale and speed claimed.

  16. tallbloke says:

    Tim H: The problem is complex, due to instrumentation changes at the same sites as well as site changes.

  17. Paul Vaughan says:

    John S. is a sensible, infrequent commentator at CE. Exceedingly-rare sensible-commentary at CE gets lost like a needle-in-a-haystack. (Similarly, Bill Illis commentary at wuwt is worth noting.)

    Here’s some John S. commentary that’s worth noting here:
    John S. | July 30, 2014 at 9:35 pm |
    The basic statistical model specified by Roman M and used by BEST is structurally devoid of any sensible definition of the unknown “average regional temperature” that it attempts to estimate by assigning fixed “offsets” to each station record. Thus, by default, spatial homogeneity along with temporal coherence is tacitly assumed for the entire globe. Such a simplistic model can only satisfy an academic mind totally bereft of geophysical experience. No matter how clever the PR campaign to defend BEST’s globally-kriged data products, they are an egregious boondoggle.
    John S. | August 1, 2014 at 6:04 pm |
    In the conceptual model

    data = regional ave. + constant local offset + noise

    nowhere are the spatial limits of what constitutes a “region” operationally
    specified. Without any such specification, what data series enter into the
    estimation of the regional average becomes a matter of programming fiat.
    Because in many regions of the globe the spatial coverage is very poor,
    data series from very remote locations are per force utilized in the
    dynamic programming algorithm that minimizes the squared discrepancy between
    station data and the regional estimate by adjusting offsets.

    Extensive geophysical experience, however, indicates that not only does the
    average temperature level change from station to station, but so does the
    year-to-year variance and indeed the spectral structure of such variation.
    While spatial homogeneity (aside from fixed offsets) may be a tenable
    assumption over distances of several hundred kilometers in the absence of
    strong topographic and maritime influences, it cannot be relied upon in the
    general case. Especially in mountainous country near coasts, the
    time-series of temperature variations <100km apart often lack the strong
    inter-station coherence that is necessary to construct reliable time-series
    of "regional averages." And without strong coherence, there is no way of
    combining snippets of record into meaningful time-series of greater length.

    Hope this clarifies my critique.

    I can confirm from first-hand diagnostics that BEST spatial patterns are corrupted. (I'm not suggesting this was done intentionally.) The assumptions implicitly (and never stated explicitly) built into their spatial aggregation methods are naive. Why didn't they do some simple diagnostics to easily discover this themselves? (Don't know.)

  18. tchannon says:

    That was fun (not), but it’s done. The computer is writing out a new plot set for HadCRUT4 as I write this. I’ve already produced a corrected set for HadCRUT3.

    The question is what to do with them. A new article will only annoy people; not posting a new article will fail to get the correction noticed.
    I doubt there is a material difference in the results.

  19. Paul Vaughan says:

    Tim, if you’ve got color-coded .png maps for each of the 3 grid categories, please share. Thanks.

  20. tchannon says:

    Anything done has to be done the hard way by me. Just about every library known is broken in one way or another. Area maps are particularly cantankerous.

    Originally I produced a map for each dataset. Then it occurred to me it ought to be easy to put all three in one: bad idea… Bizarre bugs appeared (not mine, but in the plotter tool); it took more than a day of solid work to work around that one with a kludge (fonts doing crazy things; a reported and unfixed problem).

    Changing text colour would probably work, since I write the script which writes the script which plots (wheels within wheels).

    As it stands I doubt that would be visible, and three conflicting colours?

    Let’s put this a different way: what is the objective?
    We know coverage varies with time. We know whole areas are blank.

    FYI CRUTEM4 has zero data in the southern hemisphere during 1850, another story perhaps.

  21. tchannon says:

    Updated with HadCRUT3 and HadCRUT4 PDFs bundled in a zip; see the article.

  22. […] I used an incorrect mix of datasets, see Talkshop thread here. Corrected PDF and now expanded to include […]

  23. David A says:

    You folk are great with detail in an area vastly overcomplicated by time and human nature. I have a simple question regarding GISS data and their anomaly base period.

    One would assume that the GISS global mean anomaly base never changes, as it is based on the SAME past period, 1951 to 1980.

    Now if the past anomaly basis, 1951 to 1980, is being changed (and they do continue to retroactively change the past, including this period), then current maps may be based against a different anomaly, even if it is the same period. (Indeed, if you were to retroactively cool that past base period, then new maps based against it would appear warmer relative to a now-cooler past.) Which brings up a question: if they are changing the past, does the base anomaly change?

    I bring this up because GISS only states that they base the anomaly on this period, but they do not specify which version of that period is being used. I assumed they would use the original base period (before later changes to that period) for all anomaly readings. But then I remembered that this is climate science.

    thanks in advance.

    Oh, here is a chart of GISS changes (sans the dates of those changes)

  24. Steven Mosher says:

    read the papers.

  25. David A says:

    Steven Mosher says:
    September 22, 2014 at 11:53 pm

    read the papers.
    Mr. Mosher, is that a response to my question? There are lots of papers in this world, so a less cryptic response would be better. Are you retired? I work fifty-plus hours a week. I have looked through some of the official GISS online information and did not find a direct answer.

    Did you? Perhaps you would be so kind as to direct me to it.

  26. David A says:

    OFF topic but please check it out.
    The error margins for surface GAT just increased 100-fold.

    Click to access E___E_algorithm_error_07-Limburg.pdf

  27. tchannon says:

    “Thursday, September 25, 2014
    New paper finds global temperature data trend prior to 1950’s “meaningless” & “artificially flattened”
    A correspondence published today in Nature Climate Change is a damning indictment of
    the updated HADCRUT global temperature database, which is used as the basis of all of the other land-based temperature databases, including GISS and BEST.”


    “Phil Jones was able to determine the 1850 southern hemisphere temperature to three digit precision, from a single thermometer in Tasmania. The graph above shows the complete time series since 1850 for Tasmania.”