On noticing that Spencer had posted the latest UAH TLT global temperature on WUWT, usually some days before the gridded data is published, I had a very quick shufty, noted the new figure, thought about updating the gridded data I use here, then remembered: not just yet.
However, I recall saying somewhere that I expected a ramp up. I found the data directory, sorted it by date, and loaded a file to remind myself where I had left things.
As a reminder
Left as is, with transparency, because I must have overlaid it as a check.
The head figure shows two models: one where the most recent data was withheld, the other using all of it. This is one of the things I do to get some idea of the sanity of the extrapolation. (On checking, the withheld run goes to 2010; the last data value used is December 2009.)
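The withholding check described above can be sketched in a few lines. The actual model used in the post is not stated, so a plain linear least-squares trend on synthetic monthly data stands in for it; the idea is the same: fit on a truncated series, extrapolate, and compare against the withheld points.

```python
import math

# Sketch of the withholding check: fit a model on truncated data,
# extrapolate, and compare against the points that were held back.
# The post's real model is unknown; a linear trend is a stand-in.

def linear_fit(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Synthetic monthly anomalies: a small trend plus an oscillation.
months = list(range(120))                      # 10 years of data
series = [0.002 * t + 0.1 * math.sin(2 * math.pi * t / 60)
          for t in months]

withheld = 12                                  # hold back the last year
a, b = linear_fit(months[:-withheld], series[:-withheld])

# Compare the extrapolation against the withheld values.
for t in months[-withheld:]:
    predicted = a + b * t
    print(t, round(predicted, 4), round(series[t], 4))
```

If the two runs diverge badly over the withheld span, the extrapolation should not be trusted far ahead; if they agree, a few months of forecast is defensible.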
Spencer’s report of 0.202, 0.206 and 0.506 fits well enough with the model.
Why does this seem to work?
What I have noticed is that well-sampled data of this kind has a pattern with underlying structure, and any rapid deviation from it is hard. Really, though, I have no idea why. My assumption is that it is the result of characteristic periods in some earth systems interacting.
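One minimal way such characteristic periods could be detected is a brute-force scan of single-frequency Fourier power over candidate periods. This is purely illustrative, on synthetic data; nothing here claims to reproduce the post's model.

```python
import math

# Illustrative only: scan trial periods and measure how much
# single-frequency Fourier power each one carries in the series.

def dft_power(series, period):
    """Power of one trial period (in samples) in the series."""
    n = len(series)
    w = 2 * math.pi / period
    c = sum(y * math.cos(w * t) for t, y in enumerate(series))
    s = sum(y * math.sin(w * t) for t, y in enumerate(series))
    return (c * c + s * s) / n

# Synthetic series: a strong 60-sample cycle plus a weak 13-sample one.
series = [math.sin(2 * math.pi * t / 60)
          + 0.3 * math.sin(2 * math.pi * t / 13)
          for t in range(360)]

# Scan candidate periods and pick the strongest.
best = max(range(5, 121), key=lambda p: dft_power(series, p))
print("dominant period:", best)   # the 60-sample cycle dominates here
```

With a few such periods pinned down, extrapolating the fitted cycles a short distance ahead is what makes a few-months forecast plausible, and why rapid deviation from the pattern is rare.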
The biggest problem is when the dataset is changed, which has happened many times, some without a version number change.
Very little change.
Updated to January 2013. The filtered series rises very slightly at the end.
Updated to January 2013, filtered; the model using all data, not the withheld version.
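The post shows a filtered series without naming the filter. A centered moving average is a common minimal choice for smoothing monthly anomalies, so here is a small stand-in; the filter actually used may well differ.

```python
# Assumption: the filter is unspecified in the post; a centered
# moving average is used here purely as a minimal stand-in.

def centered_moving_average(ys, window):
    """Centered moving average; the window shrinks at the ends."""
    half = window // 2
    out = []
    for i in range(len(ys)):
        lo = max(0, i - half)
        hi = min(len(ys), i + half + 1)
        out.append(sum(ys[lo:hi]) / (hi - lo))
    return out

data = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.5]
print(centered_moving_average(data, 3))
```

Note the shrinking window at the ends: the last filtered points lean heavily on the newest data, which is why a slight rise at the end of a filtered series should be read cautiously.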
The future? A few points up there, then down a bit; by then the extrapolation will know more data. A forecast of a few months is fine.
Yes, you can hold me to this later. Better to show than always act safe.
h/t to WUWT
Post by Tim Channon