Clive Best: Evidence that Marcott’s up-tick is an artefact

Posted: March 25, 2013 by tallbloke in Analysis, Dataset, Measurement, methodology, Natural Variation

I have calculated from scratch the globally averaged temperature anomalies for the 73 proxies used by Marcott et al. in their recent Science paper. The method is described in the previous post. It avoids any interpolation between measurements and is based on the same processing software that is used to derive HADCRUT4 anomalies from weather station data. Shown below is the result covering the last 1000 years, averaged in 50-year bins. I am using the published dates; re-dating is a separate issue discussed below.
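
For illustration, here is a minimal sketch (in Python, and not the actual processing code) of the no-interpolation approach described above: each proxy is converted to anomalies and only its measured values are averaged within each 50-year bin. The data layout, the fallback baseline and the simple unweighted mean over proxies are my own assumptions; the real HADCRUT4-style processing grids and area-weights the data.

```python
import numpy as np

BIN = 50  # years per bin

def proxy_anomalies(years, temps, base=(1961, 1990)):
    """Anomalies relative to the proxy's own 1961-1990 mean; if the proxy has
    no points in the baseline period, fall back to its overall mean (assumption)."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    in_base = (years >= base[0]) & (years <= base[1])
    ref = temps[in_base].mean() if in_base.any() else temps.mean()
    return years, temps - ref

def bin_average(proxies, year_min=1000, year_max=2000):
    """Average only the measured points falling inside each 50-year bin;
    no values are generated between measurements."""
    edges = np.arange(year_min, year_max + BIN, BIN)
    per_proxy = []
    for years, temps in proxies:            # proxies: list of (years, temps) pairs
        yrs, anoms = proxy_anomalies(years, temps)
        idx = np.digitize(yrs, edges) - 1
        binned = np.full(len(edges) - 1, np.nan)
        for b in range(len(edges) - 1):
            sel = idx == b
            if sel.any():
                binned[b] = anoms[sel].mean()   # mean of measured values only
        per_proxy.append(binned)
    stack = np.vstack(per_proxy)
    return edges[:-1], np.nanmean(stack, axis=0)   # mean over proxies with data in each bin
```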

Figure 1: Detail of the last 1000 years showing in black the globally averaged proxy data and in red the HADCRUT4 anomalies. The proxies have been normalised to 1961-1990. Shown in blue is the result for the proxies after excluding TN05-17.

There is no evidence of a recent uptick in these data. Previously I had noticed that much of the apparent upturn in the last 100-year bin was due to a single proxy: TN05-17, situated in the Southern Ocean (Lat=-50, Lon=6). The blue dashed curve shows the 50-year resolution anomaly result after excluding this single proxy.
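
A quick way to see how much a single record can dominate a sparsely populated bin is a leave-one-out check. The sketch below uses invented numbers (the proxy names and anomaly values are hypothetical) purely to illustrate the calculation behind the blue curve.

```python
import numpy as np

# Hypothetical anomaly contributions to the most recent 50-year bin;
# the values are invented for illustration only.
last_bin_values = {
    "proxy A": -0.05,
    "proxy B":  0.10,
    "TN05-17":  1.50,   # an outlier dominating the bin (illustrative value)
    "proxy D": -0.10,
}

full_mean = np.mean(list(last_bin_values.values()))
for name in last_bin_values:
    rest = [v for k, v in last_bin_values.items() if k != name]
    print(f"{name:8s} excluded -> bin mean {np.mean(rest):+.2f} (all proxies: {full_mean:+.2f})")
```

When only a handful of proxies reach the most recent bin, dropping one extreme record can move the bin average by several tenths of a degree.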

Figure 2 shows the anomaly data using the modified carbon dating (re-dating). This has been identified by Steve McIntyre and others as the main cause of the up-tick. However, I think this is only part of the story.

Figure 2: Global temperature anomalies using the modified dates (Marine09 etc). Proxies are averaged in 50 year time intervals.

The new dating suppresses the anomalies from 1600 to 1800. There is a single high point for the period 1900-1950. The much larger spike evident in the paper around 1940 (see also here) is in my opinion mainly due to the interpolation to a fixed 20-year interval. This generates more points than there are actual measurements and is very sensitive to time-scale boundaries. I believe you should only ever use measured values, not generated values.
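
The point-count inflation is easy to demonstrate with a toy example. The proxy below is invented, sampled roughly every 150 years (typical of the 100-300 year resolution of these records); interpolating it onto a fixed 20-year grid manufactures values where nothing was measured.

```python
import numpy as np

# An invented proxy sampled roughly every 150 years over the last millennium
meas_years = np.array([1000, 1160, 1310, 1450, 1620, 1770, 1930])
meas_temps = np.array([0.1, 0.0, -0.1, -0.2, -0.3, -0.1, 0.2])

# Fixed 20-year grid: linear interpolation generates values between the
# real measurements.
grid = np.arange(1000, 1941, 20)
interp = np.interp(grid, meas_years, meas_temps)

print(len(meas_years), "measured points ->", len(grid), "interpolated points")
# 7 measured points -> 48 interpolated points; the extra 41 values are
# generated, not observed, and the endpoints depend on where the grid stops.
```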

There is no convincing evidence of a recent upswing in global temperatures in either graph, whether based on the published or on the modified dates. I therefore suspect that Marcott’s result is most likely an artefact of the interpolation of the measurement data to a fixed 20-year timebase, which is then accentuated by the re-dating of the measurements.
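
The “accentuated by re-dating” part can also be illustrated with a toy example: a modest date shift moves a measurement across a 20-year bin boundary, changing two bin averages even though the measured value itself never changed. The years and the size of the shift below are hypothetical.

```python
import numpy as np

edges = np.array([1900, 1920, 1940, 1960])   # 20-year bins

year_published, year_redated = 1938, 1942    # hypothetical 4-year re-dating
value = 0.4                                  # the measurement itself is unchanged

def bin_index(year):
    return int(np.digitize(year, edges)) - 1

print(f"value {value} dated {year_published}   -> bin {bin_index(year_published)} (1920-1940)")
print(f"value {value} re-dated {year_redated} -> bin {bin_index(year_redated)} (1940-1960)")
# With interpolated series the effect is amplified, because every generated
# 20-year grid point derived from this measurement moves with it.
```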

Updated 24/3: re-dating graph included.


Comments
  1. Clive Best says:

    Thanks TB,

    The main point I want to make is that it is simply wrong to generate pseudo-data by interpolating the measured data onto a 20-year time-base, especially because the resolution of those measurements is mostly 100-300 years! There is no uptick if you only use the measurement data. Steve McIntyre and others have focussed on the re-dating as being the cause of the uptick. However, the re-dating just compounds the original error.

  2. mikep says:

    My understanding was that it’s the core-top dating, not the radiocarbon dating, that makes the big difference.

  3. Clive Best says:

    Yes, the largest shifts in time were for the core tops. One proxy, MD95-2043, had its level-zero (core-top) date shifted by 1000 years, to 1950. Marcott may even have a good reason for this shift, but in any case it did not contribute to the uptick. There were, however, some subtle re-dating effects which moved proxies into different 20-year bins, making the uptick appear stronger. This is only the case, though, when you use the interpolated data. The actual measured data show no uptick with either the original or the re-dated chronologies.

  4. Doug Proctor says:

    Right from the beginning, the Marcott report did not do well on the “sniff test”. It is a good demonstration that critical thinking, not pure reliance on technically correct algorithms, is essential to all human understanding.

    You can’t cut the human out of the reasoning loop. We stopped using the GIGO (garbage in, garbage out) acronym when we were told for the four thousandth time by Jobs or Gates that their machines were smarter than us. Never smarter, just faster.

    But smarter sells more stuff.

  5. […] on the climate web the controversy over the reconstruction in Marcott et al., 2013 continues (here and here on CM), thanks to the echo of […]