Just How Bad Is The USHCN Data Tampering?

Posted: June 30, 2014 by tallbloke in solar system dynamics

Ohhh dear. Steve Goddard has found a big divergence between real and ‘estimated’ data in the USHCN temperature ‘dataset’.

Real Science

According to the USHCN V1 docs, they were done adjusting after 1990.

[Image: ts.ushcn_anom25_diffs_urb-raw_pg.gif (650×502)]

According to the V2 specs, they use the same TOBS algorithm as in V1. So it seems safe to assume that stations with no missing data after 1990 need no adjustments.

1990 was also the year when they started exponentially losing station data, and started doing a lot of infilling.

So I did an experiment. I calculated the post-1990 measured temperatures for all stations with no missing data, and the post-1990 temperatures for all of the fabricated data. The fake data is diverging from the real station data at a phenomenal 5.3 degrees per century.
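
A rough sketch of that calculation is below (not Goddard's actual code). It assumes the USHCN monthly values have already been parsed into a table with columns station, year, temp_c and an estimated flag for infilled values; those column names, the parsing step, and the use of the estimated flag as a stand-in for "missing data" are assumptions made purely for illustration.

```python
# Minimal sketch, assuming a pre-parsed DataFrame `ushcn` with columns:
# station, year, temp_c, estimated (True for infilled values).
# Not the original analysis; column names and parsing are assumed.
import numpy as np
import pandas as pd


def trend_c_per_century(df: pd.DataFrame) -> float:
    """Least-squares slope of the annual-mean series, in degrees C per century."""
    annual = df.groupby("year")["temp_c"].mean()
    slope_per_year = np.polyfit(annual.index.values, annual.values, 1)[0]
    return slope_per_year * 100.0


def estimated_vs_measured_divergence(ushcn: pd.DataFrame) -> float:
    """Trend of infilled data minus trend of complete-record station data, post-1990."""
    post = ushcn[ushcn["year"] >= 1990]

    # Stations with no estimated values after 1990 (taken here as a proxy for
    # "no missing data", since missing months are the ones that get infilled).
    has_estimates = post.groupby("station")["estimated"].transform("any")
    measured = post[~has_estimates]

    # All infilled values, from any station.
    estimated = post[post["estimated"]]

    # A positive result means the infilled data warms faster than the measured data.
    return trend_c_per_century(estimated) - trend_c_per_century(measured)
```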

[Image: ScreenHunter_693 Jun. 28 20.18]

That huge spike in temperatures after 1990 which NCDC shows (and which I asked you to bookmark last night) is almost entirely due to fake data. Unbelievable.

[Image: ScreenHunter_680 Jun. 27 21.17]

I’ve been talking about the discontinuity after 1990 for a long time, and there you…

View original post 2 more words

Comments
  1. DBD says:

    Yee Haa!

  2. Konrad says:

    “According to the V2 specs, they use the same TOBS algorithm as in V1.”

    In 1985 Tom Karl had a paper proposing a program for TOB adjustment that did not require individual station metadata. The paper’s conclusion specifically mentioned global warming/climate change…

    Yet over at notalotofpeopleknowthat, Nick Stokes is making the claim –

    “TOBS is applied on the basis of what information they have about changes to OBS time.”

    Time for the racehorse to visit the glue factory?

  3. tallbloke says:

    On his earlier article, Watts said:
    ” the issue that Goddard raised has one good point, and an important one; the rate of data dropout in USHCN is increasing. When data gets lost, they infill with other nearby data, and that’s an acceptable procedure, up to a point.”

    But if the divergence is this bad, it seems highly likely that the nearby data is being selected for lying on the warm side, which implies a deliberate policy. (A sketch of what generic nearby-station infilling looks like in its simplest form follows the comments.)

    What else might explain it?

  4. Guam says:

    Any decent Postgrad researcher confronted with data this dirty would be required to throw it into the circular filing receptacle.

    The reality is that the temperature record is well past salvation; they can play games and attempt to fix it all they like, but they can’t.

    Which ultimately means they cannot make any argument based on the historical record, as no one can know what it is any longer.

    This is awful, and about as unscientific as anything can become, imho!

  5. Gail Combs says:

    Verity Jones posted a graph similar to Steven Goddard’s back in 2010, when she and E.M. Smith got together to look at the “Station Dropout Problem” in the winter of 2009-2010.
    (Tallbloke, you might want to put that graph up so people can see it.)

    [Mod note:] Done. Here you go – TB

    A.W. was well aware of the problem, since in March of 2010 he posted on the “march of the thermometers”, referencing E.M. Smith’s work, and the first comment has a pointer to Verity Jones’ site, Digging in the Clay.

    Of Missing Temperatures and Filled-in Data (Part 2)
    diggingintheclay.wordpress.com/2010/03/02/of-missing-temperatures-and-filled-in-data-part-2/

    How many times do we have to flog this same dead horse? It is getting really stinking and gross by now.

  6. catweazle666 says:

    Popcorn time…

  7. Doug Proctor says:

    The huge dropout of data is coincident with the increase in estimated data. That says the algorithm needs a certain data distribution which is unavailable due to station dropout, so it creates the reference points it needs. There was clearly a data meeting in 1990 that determined that using the majority of the 1989-and-earlier stations was a problem, and that a “better” result could be obtained by the new method, estimated points and all. But the station dropout wasn’t applied all the way back. So how is the analytical system working in toto?

    Now, I would expect the new method is what they want to use, the important decision coming from that meeting ‘n all. And they would want just ONE system of analysis, so they could figure out what they did. But would this new system be appropriate to apply to the previous station data, i.e. with a different distribution than the new and improved version?

    It is becoming very unclear what we are actually looking at in the temperature data. This is the point where, in my business, we would tear up all the maps and start over, because we no longer understood what we were looking at.

    [Actually, in my business we would file all the old maps and let the project go quiet until a new Vice President took office and could restart it with new people and a new “perspective”, dismissing what was done before with the faux legitimacy that the prior work was oranges to today’s apples.]
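
On the "infill with other nearby data" point quoted in comment 3 above, here is a minimal sketch of the simplest form such infilling can take: inverse-distance weighting of the values reported that month at neighbouring stations. It is purely illustrative rather than NCDC's actual procedure, and every name and parameter in it is hypothetical; the contentious step is the choice of which neighbours go into the list.

```python
# Illustrative only: inverse-distance-weighted infilling from nearby stations.
# Not NCDC's actual procedure; the station selection (the `neighbours` list and
# the `max_km` cutoff) is exactly where a warm-side bias, if any, would enter.
import math


def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def infill(target_lat, target_lon, neighbours, max_km=500.0):
    """Estimate a missing monthly value at a target station.

    `neighbours` is a list of (lat, lon, value) tuples for stations that did
    report that month. Returns None if no neighbour lies within max_km.
    """
    weighted_sum = weight_total = 0.0
    for lat, lon, value in neighbours:
        d = distance_km(target_lat, target_lon, lat, lon)
        if 0.0 < d <= max_km:
            w = 1.0 / d  # closer stations get more weight
            weighted_sum += w * value
            weight_total += w
    return weighted_sum / weight_total if weight_total else None
```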