Laying the Points Out

Posted: July 2, 2014 by tallbloke in Analysis, Dataset, Measurement

Brandon’s take on the temperature record debacle

Izuru

There has been a lot of hoopla about supposed problems in the surface temperature record recently. Unfortunately, a lot of people don’t seem to understand what the hoopla is actually about. I think the problem is people have focused too much on rhetoric and too little on actual information. I’d like to try to reverse that.

View original post 1,359 more words

Comments
  1. colliemum says:

    All quite interesting, but still doesn’t address the one point which has upset so many of us: what is the reason for data from the 1930s being adjusted down, then a little bit up, then a bit down – like a yo-yo.

    I can understand the reasoning behind these adjustments for the present time, but surely there can be no scientific reason for altering the data from historic times, such as the 1930s, and altering them repeatedly, sometimes on a weekly basis, as readers on WUWT have shown.

    If climate scientists don’t trust those historic data, because of e.g. assumed measuring errors, then don’t use them at all in your models. Mind you – it would then be difficult to claim that the last month was the ‘hottest’ month since records began …

  2. Jaime Jessop says:

    I just can’t quite get my head around all this temperature data manipulation and the reasons (justified or otherwise) for doing it. But it strikes me that what this all boils down to in the end is an issue of democracy in the presentation of data; by which I mean, if data presentation in climate science were truly democratic, there would be no requirement for people like Goddard. The ‘uncorrupted’ raw trend would be freely available and clearly presented to the public in contrast to the adjusted temperature trend, along with an attempt to explain the reasons for the adjusted data in simple layman’s terms. This clearly has been lacking, which in itself raises suspicions quite naturally. Were it not for Goddard’s persistent digging, probably none of this ‘hoopla’ would have been made public.

  3. A C Osborn says:

    I have left this comment over on Brandon’s Post.
    “I just love this simplified explanation.
    It is so simplified it has lost the whole essence of what Goddard, Watts and many others have all found. They are not only talking about a few stations, a few values or just one adjustment, and you cannot take the first 3 points in isolation because they are all happening at the same time to the same data.
    There are whole swathes of Estimated values, where the “local” stations are also Estimated, and the values change on a daily basis.
    There are Estimated values where there is no missing data.
    Brandon, I challenge you to look at the data yourself before making such “simplified”, “Sweeping” and derogatory statements.
    Look at the data and then justify what you find with what you have just written.”

    Brandon took no notice of all the other posters verifying what both Steve and Anthony Watts are claiming. I have looked at the final data and it is Crap.

  4. Doug Proctor says:

    Why add estimated data points at the non-existent stations? Why not add estimated data points in all sorts of places?

    I’d suggest that the algorithms do exactly that. There is a gridding function. The data that exists is run through a function that creates a geographically “appropriate” grid, assigns values to each of the nodes, AND THEN adds everything/creates the map that is compared year-to-year.

    In my geological mapping world, there are those who wish not to map as per their best technical understanding, believing instead that the computer can do better (hah! you find what hasn’t been found by disobeying the obvious “rules” of contouring/observation and finding something else that fits the data; then you drill/test the prediction and, if you are correct, Robert’s your father’s brother). These engineer-dominated, machines-are-less-prone-to-error people do exactly what I describe above:

    1. Post/locate the data.
    2. Get a gridding function set up, with nodal values.
    3. Map the grid.
    4. Erase from view the original data.

    The problem is this: COMMONLY, for a part of the map the contoured values, i.e. the gridded values, do not honour the hard data points!
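
    A minimal sketch of the general behaviour Doug describes, not any agency’s actual gridding code: a few made-up station values are gridded with inverse-distance weighting, and reading the gridded field back at a station location can return something other than the measurement actually taken there.

```python
# Sketch: grid a handful of "station" values with inverse-distance weighting,
# then read the gridded field back at a station location.  The gridded value
# need not honour the original measurement.  Purely illustrative.
import numpy as np

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # x, y
values = np.array([10.0, 11.0, 12.0, 25.0])                            # e.g. temps

def idw(point, pts, vals, power=2.0):
    """Inverse-distance-weighted estimate at `point` from all stations."""
    d = np.linalg.norm(pts - point, axis=1)
    if np.any(d < 1e-9):                     # exactly on a station: return it
        return vals[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * vals) / np.sum(w)

# Build a coarse grid whose nodes do NOT coincide with the stations,
# then interpolate the gridded field back to each station location.
xs = ys = np.linspace(-0.5, 5.5, 7)
grid = np.array([[idw(np.array([x, y]), stations, values) for x in xs] for y in ys])

def grid_lookup(point):
    """Bilinear read-back from the gridded field (what a map user sees)."""
    i = int(np.clip(np.searchsorted(xs, point[0]) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, point[1]) - 1, 0, len(ys) - 2))
    tx = (point[0] - xs[i]) / (xs[i + 1] - xs[i])
    ty = (point[1] - ys[j]) / (ys[j + 1] - ys[j])
    top = grid[j, i] * (1 - tx) + grid[j, i + 1] * tx
    bot = grid[j + 1, i] * (1 - tx) + grid[j + 1, i + 1] * tx
    return top * (1 - ty) + bot * ty

for s, v in zip(stations, values):
    print(f"station {s}: measured {v:.1f}, gridded map says {grid_lookup(s):.1f}")
```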

    Yes, you can set up better or worse grids, but we are dealing with a non-statistically valid data distribution. The lateral variations are too great for the data density: “clumping” is impossible to deal with properly.

    The geologists hide this fact (and may not know it) by removing the datapoints from the computer-contoured and displayed maps. While the gross map is correct, the details may be wrong, even while, again, they average out.

    If Goddard and you are right, the government maps may have this gridding vs datapoint honouring problem. So individual stations are now incorrect, but the overall pattern is right. In oil and gas, this is not good enough, as we drill on the basis of specific details, not the overall trend. In climatology, the reverse may be true: we don’t care if Tulsa is exactly right, but we care that the Texas panhandle is hot as hell.

    I suspect many of the more minor changes we see are artifacts of regridding. However, if there is retention of a lot of previous nodal points, we could be seeing a cumulative, creeping error as adjustments are added to adjustments.

    I like hand contouring because you can take into account non-specific parameters. For the climate maps, that would be mountains, lakes, rivers, rain-shadows, deserts, UHIE etc. You don’t end up with weird temps in Peru because you have additional data you are considering. You won’t have rainfall predicted in dry areas that lack weather stations. But now you have a subjective element that can be argued.

    In an earlier post somewhere I wrote about Computational Reality vs Representational Reality. The first is what you get doing math. The second is what you would have if God put a datapoint every 15m. They can both be different but they are still “right” if we keep in mind the scale and type of error we are dealing with. There is, after all, only one true situation: the universe may have a myriad possibilities, but each moment is a fixed, specific one.

    But discussing the Representational vs Computational correlation, its meaning and use requires a lot of understanding and consideration, neither of which is helpful when your job is to generate large font headlines for a fundraiser.

  5. catweazle666 says:

    So, if the data doesn’t match the computer games models, naturally it’s the data that’s wrong, right?

    “Hide the Decline!”

  6. catweazle666 says:

    I wish this thing would let me edit after I’d posted!

    [Reply] Fixed

  7. wayne says:

    Brandon was in a way so wrong (or sly) in his simple example he posted, though such simplistic examples do make points that are easily grasped, or easily misleading.

    First, his stack of 1,2,3,4,5 series is an assumption of a great warming trend in itself, ‘1’ being cold, ‘5’ being warm and of course if the missing data points are always in the colder one or two values you will compute a value warmer than if the missing ‘2’s were actually recorded. If he would have chosen ‘4’s as being missing instead of the ‘2’s then a cooler than proper average would have ensued. But curiously that is exactly what you will find if you download the roughly 17 gigabytes of unadjusted data from BEST and take a real look through the records, noticing the flags: there are many, many more missing values in the deep winter months than in the hot summers, sometimes weeks or months missing. Being hot outside never stops someone from reading the min/max for a day, but in the winters the opposite is not true.

    So let’s take a more realistic simple sample of a group of records using a wave form or triangular pattern which more accurately represents a ‘year’. Instead of just 1,2,3,4,5 let’s expand this simple example to 3,4,5,4,3,2,1,2,3,4,5,4,3,2,1,2,3 (17 points) and have this duplicated a number of times as Brandon did, but this example, which also averages to three, has two ‘5’s and two ‘1’s to better represent three ‘years’ of records. Just as Brandon described, if you are missing a couple of ‘2’ values and you just average the remaining real values, you will get an average ABOVE the known average of three. He was saying that if you interpolate for the missing ‘2’s, with a ‘3’ on one side and a ‘1’ on the other, you would artificially infill ‘2’s, which would correct for their being missing. But what more often happens in the real data of hundreds of thousands of values is that it is the ‘1’s that are missing… the snow is too deep, it is too cold to venture out to read the thermometers, or the station is simply covered by snow. In this case it is the ‘1’s that tend to be missing, and the missing ‘1’s have ‘2’s on either side, so when you interpolate you infill ‘2’s where there should have been ‘1’s, and when you average there is an artificial warming of the mean value you find.
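
    A toy calculation of the effect wayne describes, using his 17-point triangular ‘year’ and his stated assumption that the coldest readings (the ‘1’s) are the ones that go missing: both dropping them and infilling them from their neighbours push the average above the true value of 3.

```python
# Toy version of wayne's triangular "year"; the true mean is exactly 3.
year = [3, 4, 5, 4, 3, 2, 1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3]
true_mean = sum(year) / len(year)                                  # 3.0 (51 / 17)

# Assumption (wayne's): the coldest readings, the '1's, are the ones that go missing.
observed = [v if v != 1 else None for v in year]

# (a) Average only what was observed.
kept = [v for v in observed if v is not None]
avg_dropped = sum(kept) / len(kept)                                # 49/15 ≈ 3.27

# (b) Infill each gap from its two neighbours (both are '2's in this pattern).
filled = [v if v is not None else (observed[i - 1] + observed[i + 1]) / 2
          for i, v in enumerate(observed)]
avg_infilled = sum(filled) / len(filled)                           # 53/17 ≈ 3.12

print(true_mean, avg_dropped, avg_infilled)  # both estimates land above the true 3
```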

    From this factor alone, even Steven’s calculations should be producing warmer averages than is real.

    If you make this series large enough and have random missing values, the average will always converge to ‘3’ as expected, as Steven pointed out; that is normal science when handling data with variances.

    I can see the ‘warmists’ currently jumping into action trying to marginalize the many questions Steven Goddard has raised, but so far I have to come down on Steve’s side, for he is highlighting the same factors I found a few years ago, when I could not seem to get anyone to listen to what was really occurring in the realm of the ‘adjustments’.

    If you simply estimate the adjustments as a linear trend from the adjustment plots provided on the web by our government from 1940 onward, ignoring the older adjustments prior to 1940 (minimal), and apply that to HadCRUT4, you get something close to this plot for the more realistic view of the temperature time series: http://i39.tinypic.com/1118rnl.png. I found the adjustments average to about 0.75 °C per century, or 0.000625 °C per month, which is what is used for the correction in that plot. Keep in mind this is an estimate, but it is close to what you would get if you applied the monthly upward adjustments month by month.

  8. tchannon says:

    wayne, that is close to the shape of the raw data for the longest contiguous US dataset in USHCN.

    Really the variation is so small as to be questionable.
    For example, averaging only refines if the errors are normally scattered around an absolutely accurate actual calibration. In reality the equipment is not that accurate, and it would also need to be provably that accurate. Given equipment changes, design and manufacturing variations, the whole thing is iffy.
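
    A quick illustration of that point with made-up numbers: averaging many readings shrinks random scatter, but a fixed calibration offset in the instrument passes straight through to the average.

```python
# Illustration only (made-up numbers): random error averages away, a
# calibration offset does not.
import random

random.seed(1)
true_temp = 15.0          # the "real" value
offset = 0.4              # instrument reads 0.4 degrees high (calibration error)
noise_sd = 0.5            # random reading-to-reading scatter

readings = [true_temp + offset + random.gauss(0.0, noise_sd) for _ in range(10_000)]
mean = sum(readings) / len(readings)

print(f"true {true_temp:.2f}, mean of 10,000 readings {mean:.2f}")
# The mean converges on ~15.4, not 15.0: averaging refines precision,
# not accuracy.
```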

    In this case there is some wonderful documentation which might help untangle some of the mysteries of what is going on. Might be a blog item as an exercise.

  9. I was curious what people were saying on this repost so I decided to look before going to bed. I’m tired so I won’t write much, but I have to address wayne since he said:

    Brandon was in a way so wrong (or sly) in his simple example he posted

    Then went on to misunderstand the example in a simple way:

    First, his stack of 1,2,3,4,5 series is an assumption of a great warming trend in itself, ‘1’ being cold, ‘5’ being warm and of course if the missing data points are always in the colder one or two values you will compute a value warmer than if the missing ‘2’s were actually recorded. If he would have chosen ‘4’s as being missing instead of the ‘2’s then a cooler than proper average would have ensued.

    wayne got things backwards. There was no warming trend in my example. My example had five records, each of which remained constant the entire time. That is, one station was always 1, another was always 2, and so forth. It would have been silly for me to do what wayne thinks I did as that’d make each station identical.

    Moreover, he refers to what would have happened if I “would have chosen ‘4’s as being missing instead of the ‘2’s,” implying I gave an incomplete picture of the effect I showed. However, I immediately discussed what would have happened had I removed different values. I just didn’t bother generating a new table of values because it’d be easy for anyone to visualize the difference.

    Not that either of those issues really matter. Demonstrating the point is trivial. If missing data is not randomly distributed, it will bias simple averages. The missing data in the USHCN data set is not randomly distributed, thus it will bias the simple averages Steven Goddard calculates. I think everyone should be able to agree to that.
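
    A small sketch of that general point (the station values and counts here are invented for illustration, not taken from Brandon’s post): with five constant stations reporting 1 through 5, dropping readings at random leaves the simple average near 3, while dropping readings only from the coldest station pulls it up to 3.5.

```python
# Five "stations" reporting constant values 1..5 for 20 "months" each; the
# all-station, all-month mean is exactly 3.
import random
random.seed(0)

records = [(station, month, float(station)) for station in range(1, 6) for month in range(20)]

def simple_average(recs):
    vals = [v for _, _, v in recs]
    return sum(vals) / len(vals)

kept_random = random.sample(records, len(records) - 20)   # 20 readings dropped at random
kept_cold = [r for r in records if r[0] != 1]             # all 20 readings from station 1 dropped

print(simple_average(records))      # 3.0
print(simple_average(kept_random))  # close to 3.0, give or take sampling noise
print(simple_average(kept_cold))    # 3.5: biased warm by non-random gaps
```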

  10. colliemum says:

    I find it extraordinary that the debate about these data has again been diverted towards why the adjustments, infillings etc are good and don’t really matter, and whose calculations are good and whose are bad.

    I have yet to see a justification for re-adjusting, up and down, historical data.
    Can someone please explain why these re-adjustments are justified? I may be naive, but when I studied science, fiddling with observed data was simply beyond the pale.

    So, all you modern scientists, please do tell this old-fashioned scientist why fiddling with observed data is now acceptable?

  11. kuhnkat says:

    I got as far as him claiming that using other stations to calculate replacement data is better than Steve’s method. A flat out assertion with no math to back it up and in fact no observational data to lend support. Part of Steve’s complaint is that warmer stations are being used to infill data!! Not addressed as it obviously blows up his apologia.

  12. tallbloke says:

    Kuhnkat: Agree, and this is the key point in my opinion. The number of stations not included in the network but still existing and available for ‘infilling’ has vastly increased since 1990. Local weather fluctuates much more than weather ‘averaged’ across a wider area. So the opportunity is there to be sneakily filtering ‘nearby stations’ to use the warmer ones to ‘infill missing data’.

    But they got so desperate that they had to start ‘infilling’ data that wasn’t missing, and this is what Steve Goddard has spotted.

  13. Gail Combs says:

    If you add what J W Merks says about the statistics of gridding to what Steven Goddard found, it proves the entire temperature construct is completely bogus.

    The Birth of Gridding (Kriging) and why it is wrong:

    Use and Misuse of Statistics
    Geostatistics: Junk Statistics by Consensus

    …Pearson worked with large data sets whereas Fisher worked with small data sets. That was what inspired Fisher to add degrees of freedom to Pearson’s chi-square distribution. Thus was born a feud between giants of statistics. Degrees of freedom converted probability theory into applied statistics, and sampling theory into sampling practice. Fisher and Pearson were both outstanding statisticians….

    Why did geoscientists get into geostatistical thinking? All it took was a young French geologist who went to work at a mine in Algeria in 1954. He measured associative dependence between lead and silver grades of drill-core samples. But he did not count degrees of freedom. So, he did not know whether his correlation coefficient was significant at 95%, 99% or 99.9% probability. What is more, his drill-core samples varied in length. As a result, the number of degrees of freedom is a positive irrational rather than a positive integer. He did not know how to test for spatial dependence by applying Fisher’s F-test to the variance of the set of measured values and the first variance term of the ordered set. His first paper was not peer reviewed. Nobody asked him to report primary data and give references. As luck would have it, he was without peers. Professor Dr Georges Matheron and his magnum opus were accepted on face value. His students thought of him as “creator of geostatistics”. Dr Frederik P Agterberg in his eulogy called him “founder of spatial statistics”. Yet, between 1954 and 2000 Professor Dr Georges Matheron did not teach his disciples how to test for spatial dependence and how to count degrees of freedom.

    …..I was asked to cite a specific reference for the quotation in which H G Wells spoke so highly about statistical thinking. I had found it long ago in Darrell Huff’s How to lie with statistics. Penguin Books published the first edition in 1954….

    Geostatistics messed up the study of climate change. Spatial dependence in our sample space of time may or may not dissipate into randomness. Sampling variogram shows whether, where and when it does. High school students ought to be taught how to construct sampling variograms. It would have made H G Wells smile.
    About the Author: http://www.geostatscam.com/about.htm
    On the left are links to the nitty gritty of the statistical failings of kriging (gridding)
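
    For readers wondering what the test Merks keeps referring to looks like in practice, here is a minimal sketch of the comparison he describes: the variance of the data set against the “first variance term of the ordered set”. This is one reading of his description, not code from geostatscam.com, and the degrees of freedom used for the critical value (n-1 on both sides) are an assumption.

```python
# Sketch of a spatial-dependence check along Merks' lines: compare the variance
# of the measured values with half the mean squared difference between
# neighbouring samples in the ordered (spatial) sequence.
import numpy as np
from scipy.stats import f

def spatial_dependence_f_test(ordered_values, alpha=0.05):
    x = np.asarray(ordered_values, dtype=float)
    n = len(x)
    var_set = np.var(x, ddof=1)                           # variance of the set
    var_first = np.sum(np.diff(x) ** 2) / (2 * (n - 1))   # first variance term, ordered set
    f_stat = var_set / var_first
    f_crit = f.ppf(1 - alpha, n - 1, n - 1)               # assumed degrees of freedom
    return f_stat, f_crit, f_stat > f_crit                # True -> significant spatial dependence

# Example: a smoothly varying transect should show dependence; a shuffled copy typically does not.
rng = np.random.default_rng(0)
transect = np.cumsum(rng.normal(0, 1, 50))                # neighbouring samples resemble each other
print(spatial_dependence_f_test(transect))
print(spatial_dependence_f_test(rng.permutation(transect)))
```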

  14. Gail Combs says:

    OOPS forgot to include the blockquotes. The above is a quote except for first and last paragraphs.

    MORE:
    Look at the nuts and bolts of geostatistics

    Armstrong’s conundrum
    Each kriged estimate lost its variance
    Pseudo kriging variances shrink
    Smooth a little but not a lot

    Journel’s doctrine
    Assume spatial dependence
    Ignore analysis of variance
    Trivialize degrees of freedom

    Analysis of variance, one of the most powerful tools in mathematical statistics, is widely applied in science and engineering. Analysis of variance and degrees of freedom are as inseparable as Alice and Wonderland, and will remain so until our sun turns into a red giant. In the meantime, hold on to your geostatistical textbooks or buy a few before science fiction buffs do. Complete Statistics 101 before marching around in a mathematical cul-de-sac where Matheron’s gurus pound kriging drums and beat randomness by assuming, kriging, smoothing and rigging the rules of classical statistics…..

    ISO Technical Committees are in the business of developing internationally recognized and scientifically sound standards. Yet, even the Bre-X fraud didn’t provide sufficient impetus to develop ISO Standards to protect mining investors. On the contrary, unbiased confidence limits for contents and grades of ore reserves remain as elusive as the variance of a single distance-weighted average-cum-kriged estimate, and the resource game is as risky now as it was during Bre-X’s glory days when salting and kriging conspired so conveniently and convincingly. What recourse do mining investors have when a mine fails to make the grade?

    Ironically, the world’s mining industry pines to do more with less but pays scant attention to mathematical statistics. Regulatory agencies rely on members of professional associations who may teach, practice or dismiss geostatistics. Institutions of higher learning such as McGill, Stanford, UBC, and scores of others, teach the kriging game with utter contempt for mathematical statistics. All that is necessary for the proliferation of invalid statistics is that professional engineers and geoscientists do nothing……

    So the CAGW Scam had its origins in a gold mine salting scam, how wonderfully appropriate! Fraud all the way down.

    The Bre-X fraud: http://www.geostatscam.com/salted_boreholes.htm

  15. Gail Combs says:

    More on Dr. Merks (It is interesting that Mikey Mann is called Dr Mann but Jan Merks is called Mr. Merks….)
    http://www.zoominfo.com/s/#!search/profile/person?personId=246611854&targetid=profile

    Looks like Merks has stirred up a real hornet’s nest, but expect the usual circling of wagons to protect the guilty.

    “Canadian shareholders should be furious…. This is the mining world’s equivalent of aggressive accounting… If you put inventories that you don’t have on the books, then it is just as wrong…. The companies, the investment bankers, the lawyers, they just don’t want to hear about it because it limits their ability to raise money. The industry doesn’t want limits. It wants room to manoeuvre.”

    “A majority of Canadian mining companies are using a form of statistical “geo-engineering” that permits them to inflate proven and probable reserves by up to 25%… if Mr. Merks is correct — and he has some high-profile supporters — it means the value of the deposits that underpin the share prices of this country’s publicly traded gold, silver, copper, platinum, nickel and coal mining companies may be inflated. It also implies that world inventories are lower than believed.”
    http://www.zoominfo.com/CachedPage/?archive_id=0&page_id=769104035&page_url=//www.sharelynx.org/gp/goldenpot-aaq.php&page_last_updated=2013-06-01T05:20:58&firstName=Jan&lastName=Merks

  16. kuhnkat, tallbloke, it’s remarkable you two would agree I posted a “flat out assertion with no math to back it up” when just four comments up, wayne discussed the math I used to back my claim up. I even responded by discussing the math two comments later, a mere two comments before you guys claimed I provided no math.

    The reality is I devoted something like ten paragraphs to addressing this issue, most of which dealt with the math. Heck, three tables, more than half of the visual components in the post, were devoted to the math.

    I don’t know how you guys came up with such a strange idea, but it’s rather disheartening you think the topic I spent the most time on in my post wasn’t discussed at all.

  17. wayne says:

    Brandon says: “wayne got things backwards.”

    Well everyone, not exactly. I misread and took his grid of station readings as being column-major rather than row-major; I now see where I missed that point in his words, but really it matters not. Please ignore my comment about a trend existing: he had station temperatures never changing at all, which is also too simple a picture of what really occurs in the data. However, Brandon is correct in showing that if missing values are below the true unknown mean, the average of the remaining values will be too high… and if the missing values are above the mean, the average will be too low. But everyone with basic math already knows this, so that is hardly a stab at what Goddard (and many others) have found. There is no way to correctly adjust for these. I’ll uphold Goddard’s findings except in one small area with a known bias, which was the thrust of my comment above. It was said to have a few here think about that factor, which also has no magical adjustment that can make it ‘correct’. Best to just realize it is there and leave the values as ‘missing’.

    Bottom line, the adjustment graphs (such as http://i43.tinypic.com/s3m3wk.png http://i39.tinypic.com/1zfrn1l.png
    http://i40.tinypic.com/2uy2bg4.png) should vary about zero with a zero slope, and they do not: they consistently cool the older records, as if people in past decades didn’t know how to read temperatures, or all older thermometers read the temperatures too high. That is blatantly false, and it is wrong for NCDC, NOAA, GISS and HAD/UEA to sit back and somehow justify this by adjusting via homogenization, gridding, infilling, interpolation, times of observations, etc, etc. It’s so wrong, all of it, especially now, knowing about 40% is being created from nothing physical.

    UHI is another one that is very real, and it is the one being ignored; if it were ever shown how it affects temperatures over time, accounting for it would warm the older readings or lower the present readings.

  18. A C Osborn says:

    Brandon Shollenberger says: July 3, 2014 at 4:38 pm
    I challenged on your own site and you blew me off.
    I am challenging again on this site.
    Have YOU actually looked at the DATA? If you have, do you still stand by there being only the 4 points?
    Do You also stand by this statement “That means for Goddard to make his argument, he needs to argue all four points discussed in this post. For him to be right, he needs to be right about all four points.”

    Which is patently false. It is not just A Watts who has looked at the data; at least 4 other people have looked at the data and confirmed that it has major problems, one of which you do not even bother to mention: Continuous Updating of the same data, where every iteration of the data has different values for many of the entries.
    So he doesn’t have to argue any of your points, THE DATA HAS BEEN SHOWN TO BE CRAP.
    I have looked at the data, so now call me and the others liars or stupid or unable to do any analysis in public.

  19. KuhnKat says:

    Brandon, when you decide to start dealing with the actual issues and not playing Nick Stokes you will get a fair hearing.

  20. Sorry guys, but your responses show it’s pointless to respond here. Every response I’ve gotten has been argument by assertion or argument ad hominem. I’m sure you’ll disagree, but that’s how it goes.

  21. tallbloke says:

    Brandon: if you gloss over the biggest bits of the story, you can only expect to take some flak for it.

  22. tallbloke, if I had actually done that, I wouldn’t be surprised by the treatment. The problem is people can make that accusation in response to anything. Whether or not the accusation is true is irrelevant if the people making it don’t care to have anything resembling an actual discussion.

    For example, kuhnkat accused me of something that is unquestionably false. Anyone who has even skimmed my post would know it is false. Another of your commenters showed it was false by discussing exactly what kuhnkat (and you) say didn’t exist in the post.

    If that sort of thing is considered an acceptable response to a group of people, anything could be acceptable.

  23. A C Osborn says:

    Brandon Shollenberger says: July 4, 2014 at 12:13 pm

    Just stop bullshitting. You are the one who will not have the discussion, so answer my Bloody question: have you looked at the DATA yet?

    Do you stand by your only 4 points when there are 6?

    The 2 you are ignoring are
    5. Continuous Updating with different values – go and find an excuse for that.
    6. The error you get by compounding all 5 errors as they all apply to the same values.

    As I said before, Steve Goddard does not have to argue any of your points, as others have proved the Final Data Output is Corrupted and adds non-existent Trends to the data, even more than admitted by NCDC.

  24. catweazle666 says:

    colliemum says: “I have yet to see a justification for re-adjusting, up and down, historical data.”

    Oh dear, you’ll never make a “climate scientist”!

    Because it doesn’t agree with the computer games climate models of course.

    Do try to keep up!

  25. Gail Combs says:

    ….Point 2 is a more commonly discussed point. According to an NCDC statement recently publicized, the data not used is data which fails quality control tests. That claim hasn’t been subjected to examination enough to determine if it’s valid, but it’s obviously understandable bad data may get discarded….
    http://hiizuru.wordpress.com/2014/07/02/laying-the-points-out/

    This is one of the key points.

    #1. Are the thermometers calibrated and what is the error?
    #2. Are the station sites inspected for adherence to GMP?
    #3. How often are the thermometers and sites quality checked?

    The answer to all those questions is that NO Quality Checks are done!
    SEE: Metrology:
    This post is actually about the poor quality and processing of historical climatic temperature records rather than metrology.

    Second point.
    How in heck do you determine if the data is “Bad Data” if no QC checks are done? I have seen temperatures drop from a nice balmy 70 °F to below freezing and snow in an afternoon. I have seen a small local thunderstorm drop the local temperature from 96 °F to 74 °F just this past week. However, the station in the next town over would not show this temperature drop. So again, how do you determine “Bad Data”?

    Just as an example, right now it is 56 °F in mid NC. A couple of days ago the max was 96 °F and this week it is supposed to hit 98 °F. So is that 56 °F “Bad Data”? After all, 31 miles north at RDU (Raleigh-Durham International Airport) it is 62 °F, so should my station’s temperature be “adjusted” up? Given that the temps are almost always adjusted up by 1 to 4 °F, our buddies at NOAA seem to think so. This is despite the fact that my town’s station is new and in a rural area at a very small airport, AND volunteer stations in the general vicinity agree with the station and not the busy Raleigh-Durham International Airport that is used as a match.
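
    To make the question concrete, here is a hypothetical version of the kind of neighbour-comparison check such QC schemes rely on; the 5 °F tolerance and the pass/fail rule are invented for illustration, not NOAA’s actual procedure. A real but very local reading fails simply because the “reference” station did not see it.

```python
# Hypothetical neighbour-consistency check (threshold invented): flag a reading
# as "bad" if it differs from the reference station by more than some tolerance.
TOLERANCE_F = 5.0   # invented QC threshold, degrees Fahrenheit

def neighbour_check(local_reading_f, reference_reading_f, tolerance=TOLERANCE_F):
    """Return True if the local reading 'passes' this naive QC check."""
    return abs(local_reading_f - reference_reading_f) <= tolerance

# Gail's example: 56 F at the small rural airport, 62 F at RDU 31 miles away.
print(neighbour_check(56.0, 62.0))   # False: a genuine reading gets flagged as "bad"

# A thunderstorm drops the local temperature from 96 F to 74 F while the next
# town stays hot: the gap is even larger, so the flag is even more certain.
print(neighbour_check(74.0, 96.0))   # False again
```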

    NOAA has a very heavy thumb on the scale, methinks!

  26. A C Osborn says:

    Gail, it is the catch-all excuse for them to adjust anything and everything they like; how can you argue, when it is their algorithm that decides what is “Bad Data”?
    But how likely is it that you have “Bad Data” every day for 36 months in a row?
    Do they decide that one bad data point negates a whole month or a whole year?
    Science and Metrology it is not.
    I used to work in a Government Metrology Lab back in the 60s.