Rain: Is this cause for concern?

Posted: November 12, 2014 by tchannon in alarmism, Analysis, methodology, weather

A few days ago Paul Homewood picked up an item where the Met Office seem to make a fuss about UK heat and wet, although Robert Ward seems to be the one fussing. Since I have the precipitation data on hand, what do I make of it?

“More Misleading Claims From The Met Office” (7th Nov)

This cites Yahoo News.

I’ll ignore the temperature; here is the precipitous matter: ‘since records began in 1910 it has been the second wettest.’ [1]
Presumably more than 194 million tons of water dropped on London, according to calculations by the British Rainfall Organisation. [3]

The Met Office ploy is to add the data from January through the end of October and make that a year-by-year spot value.

Image

2008 110.07
2014 104.47
1927 103.47

Independently reproduced above. It seems a strange measure, so let’s look some more.
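
For anyone who wants to check the sums themselves, here is a minimal sketch of the January–October exercise. The file name and column layout (a hypothetical uk_monthly_rain.csv with year, month and rain_mm columns) are my assumptions; adapt them to however you hold the Met Office series.

```python
# Reproduce the January-to-October spot values: sum the first ten months of
# each year and rank the years, largest total first.
import pandas as pd

df = pd.read_csv("uk_monthly_rain.csv")             # columns: year, month, rain_mm (assumed)
jan_oct = df[df["month"] <= 10]                     # keep January..October only
totals = jan_oct.groupby("year")["rain_mm"].sum()   # one spot value per year

print(totals.sort_values(ascending=False).head(3))  # the top three partial-year totals
```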

 

Image

Readers may recall I de-annualised and normalised this data to produce z-scores [4]. Here I have done two exercises, first on the normalised data and then on the plain “raw” monthly data, to show it makes little difference except for the key factor of now having dimensionless z.
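
For newer readers, the de-annualise-and-normalise step amounts to scoring each month against its own calendar-month statistics. The sketch below shows only the core idea (the full procedure in [4] also handles skew), reusing the hypothetical table from the previous snippet.

```python
# Remove the annual cycle by scoring each value against its own calendar month,
# giving dimensionless z-scores.
month_stats = df.groupby("month")["rain_mm"].agg(["mean", "std"])

df = df.join(month_stats, on="month")
df["z"] = (df["rain_mm"] - df["mean"]) / df["std"]
```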

In both instances I’ve plotted the Met Office running 10-month figure, in effect a crude filter which I offset 10 months to the right, and a roughly decadal filter with end correction, as I have often used (full-blown signal processing).

Nothing surprising appears in the comparison. The decadals match with r² > 0.997, good enough for rough blog work.
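
To give a flavour of how that comparison was made, here is a rough sketch; a plain centred rolling mean stands in for my decadal filter with end correction, which is enough to see how closely the two decadal curves agree.

```python
# Apply a crude ~decadal low-pass to both the raw monthly values and the
# z-scores, then measure how well the two smoothed curves agree.
raw = df.sort_values(["year", "month"])["rain_mm"].reset_index(drop=True)
z   = df.sort_values(["year", "month"])["z"].reset_index(drop=True)

decadal_raw = raw.rolling(121, center=True).mean()   # ~10 years of monthly data
decadal_z   = z.rolling(121, center=True).mean()

r2 = decadal_raw.corr(decadal_z) ** 2                # pandas aligns and drops NaN pairs
print(f"r^2 = {r2:.4f}")
```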

So, low-pass filtering does what it always does. [2]

What I do not know (statistics is not my field):

  • Is it valid to filter the normalised data, given the Met Office have in effect done so for the as-is data?
  • Is the resultant z-score a fair measure of merit?

The filtered value is < +0.31 for October 2014.

Is that cause for concern?

And now I repeat previous warnings about the underlying data: UK data is particularly compromised by the inclusion of Scotland. I don’t want to expand on that here. (OFCOM due diligence opinion ought to be sufficient as a caution.)

 


 

1. Full text “New figures published by the Met Office show the period from January to October this year has been the warmest since records began in 1910 while it has also been the second wettest.”
https://uk.news.yahoo.com/warmest-warning-issued-131014113.html#nEVTmC5

2. I could reduce the sampling to, say, 4 years, reducing the sample count to the minimum needed to meet Nyquist, yet reproduce the same shape. A lot about PCM is, in my experience, counter-intuitive to many and outright disrespected by some, yet fully stated there is an exact regime which of course can only in practice be approached, never met.

Reproducing the shape requires a perfect reconstruction filter; it could produce more samples but no more information.
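
A self-contained toy of the point, with a synthetic band-limited series standing in for a decadally filtered record: keep roughly one sample every four years, reconstruct, and the shape comes back (edge effects aside); more samples can be generated but no more information.

```python
# Toy Nyquist demonstration: a slowly varying series survives heavy decimation
# because a band-limited reconstruction can regenerate the in-between samples.
import numpy as np
from scipy.signal import resample

rng = np.random.default_rng(0)
win = np.hanning(121)
win /= win.sum()
slow = np.convolve(rng.normal(size=1200), win, mode="valid")  # ~decadal smoothing of monthly noise

coarse  = slow[::48]                     # keep roughly one sample per 4 years
rebuilt = resample(coarse, len(slow))    # FFT-based (band-limited) reconstruction

print(np.corrcoef(slow, rebuilt)[0, 1])  # shape essentially reproduced, bar the ends
```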

3. British Rainfall, 1910, 454 pages, 33MB
http://www.metoffice.gov.uk/media/pdf/b/3/British_Rainfall_1910.pdf

4. Various posts on the Talkshop and my own blog.
This will do as a starting point.

Post by Tim

Comments
  1. rms says:

    Re filtering (and smoothing), see William Briggs, who is an expert, at http://wmbriggs.com/blog/?p=197

  2. tchannon says:

    I am familiar with some of what he has said, but it is giving server error 500 at the moment; his site is down.

    I included note [2] as a hint about that. I’ll not say more right now; see how things go.

  3. rms says:

    Link working from here.

  4. cementafriend says:

    I do not think that averaging the rainfall for the country makes much sense although GB is very small – most of Europe (without Russia) can fit into Australia’s second smallest state (Victoria). I think it makes more sense to look at a smaller area – say Wales or Cornwall – and look at cycles.
    Looking at the first graph by eye, it appears that there are three peaks of high rainfall and three of low rainfall, each roughly 40 years apart. Maybe by looking at smaller areas it will be possible to pick out shorter and longer term peaks.
    In the area (say 10 km radius) where I live, from records going back to 1893 I can see peaks of higher rainfall and troughs of lower rainfall occurring about every 22 years. A trough occurs between 2 and 5 years after a peak.
    Some farmers mention seven good years and seven poor years. Others talk about eleven-year cycles. I know that when living in another state we had a trough of low rainfall and bush fires that burnt around the house – fencing gone and some small outbuildings burnt – three times, each separated by 10 to 11 years.
    Weather and climate can be very localised. I have seen it rain in our street 100 m away and nothing in the gauge on our lawn. My daughter has had her house flooded when an official council weather station nearby recorded 450 mm of rain in 3 hours. (Gutters normally cope with about 50 mm per hour and maybe 400 mm per day.)

  5. Stephen Richards says:

    I certainly remember when I lived in East Anglia that rainfall, or the lack of it, came in cycles of about 7 years.

  6. tchannon says:

    Briggs server is back. (following day here)

    That’s not the article I expected so I’ll have a read later on.

    [later]
    Briggs’ site is erratic. “500 – Internal Server Error”
    Looks like he is using Yahoo hosting, notorious for error 500. (You will find this in web search results; the crawler bot got it 🙂)
    Kept on retrying and got the page again.

  7. Graeme No.3 says:

    cementafriend:
    I think you meant to say that GB would fit in the second smallest State (Victoria). Given that from the west of Scotland to the east of Germany is around 1200 km, Europe would take up a bit more than just Victoria. Did you mean that it would fit in Western Australia?

  8. tchannon says:

    Em, I disagree with Briggs, who I think is fooling himself.

    In the example he produces there is confusion between statistics and signals.

    This is a large subject which is very difficult to explain to people in a different field, so much so that I don’t know how to start explaining, because there are so many related facets.

    I met just one person who probably knew all the theory well enough.
    Now here is a surprise: he left me a copy of a paper after I showed him, for real, something he had never seen – I had a first production prototype on the bench and he turned up out of hours. I seem to have remembered his name; a web search actually found him:
    https://uwaterloo.ca/audio-research-group/people-profiles/stanley-p-lipshitz

    In his example case Briggs does not have the randomness he assumes **in the context he then applies**. A sub-domain.

    If you create 100 Gaussian random sample values and then reduce the number to 4 by decimation, that is what you have left, even if you nominally keep 100 points, i.e. omit the actual removal of the 96 values. Done perfectly, the old data points contain A+B but the new points contain B only.

    It is not that simple. There is a trade between time and amplitude: the “removed” points did have an effect (they contain B) and are now encoded within the 4 point values as greater numeric precision than the original. (Related to this is how e.g. “1-bit” data converters work: run very, very fast and use filtering to trade time for more precision.) A toy numeric sketch is at the end of this comment.

    I hazard a guess that, from a statistics point of view, the p-value or whatever should be computed taking all quantisation into account.

    Statistics does not usually have a concept of Nyquist, yet here we have to face both Nyquist and Shannon: there is a window of correctness within which there is a precise representation of what is a truly analogue quantity. This is still conditional and an approximation.

    This is partially about reducing data, as is always done by e.g. the Met Office, and usually improperly (Nyquist). If they are doing that, how far should it go? Hence I take it a step further, to decadal.

    Related is the original data reduction, in this case a gauge read at some arbitrary rate which can be written down. These various figurings have been rate-reduced and so on until we are given one value a month at up to 4 digits. Briggs wants the raindrops. (I agree, but it makes no material difference if the prior reduction was done well.)

    Another point is trying to break the usage of buckets of water as a means to assert, directly or indirectly, a point about supposed human factors.

    And also to turn this into a dimensionless value which does have a basis for judging whether it is significant.

    In hydrology rain seems to start out as something like 1/f, but on averaging many gauges spatially it turns Gaussian. This is, I suspect, much the same problem as the above.

    Make head or tail of the above if you can.
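
    A toy numeric version of the 100 → 4 reduction, with a simple block average standing in for the low-pass step (an illustration of the precision trade only, not my actual filtering):

    ```python
    # 100 Gaussian samples reduced to 4 by averaging blocks of 25: the fast
    # component ("A") is discarded, the slow component ("B") is kept, and each
    # retained value carries more amplitude precision (its spread is roughly
    # 1/sqrt(25) of a single raw sample's).
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=100)              # the original 100 values: A + B

    four = x.reshape(4, 25).mean(axis=1)  # the 4 decimated values: B only

    print(x.std(), four.std())            # amplitude spread trades against time resolution
    ```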

  9. tchannon says:

    Ongoing significant server outages: reports of 500, no response, etc. doubleclick.net is one cause. JoNova has had problems too, so it is affecting Oz as well.

  10. A. Ames says:

    tchannon:

    Looking at your excellent charts, as an industrial scientist who has looked at rooms full of time-series data, I would strongly reject the assertion that there is something different recently.

    As a general rule, Briggs is right. One cannot arbitrarily average samples of a time series.

    It is almost always useful to find the auto-correlation function, AC. You can get significance for the AC by using the same data multiple times with random ordering. If the data is not time-correlated, you can do random reordering to test different statistics. (On a spreadsheet I sort the data against a random variable.) Some games are possible when you know what the AC is.
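
    A minimal sketch of that shuffle test, on a made-up persistent series (any real series would go in its place):

    ```python
    # Autocorrelation out to 24 lags, compared against the spread obtained when
    # the same values are randomly re-ordered (which destroys any time structure).
    import numpy as np

    def autocorr(x, max_lag):
        x = np.asarray(x, dtype=float) - np.mean(x)
        denom = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

    rng = np.random.default_rng(2)
    series = np.zeros(500)
    for t in range(1, 500):
        series[t] = 0.6 * series[t - 1] + rng.normal()   # toy series with persistence

    ac = autocorr(series, 24)
    null = np.array([autocorr(rng.permutation(series), 24) for _ in range(200)])
    band = np.percentile(null, [2.5, 97.5], axis=0)      # ~95% band under "no time order"

    print(ac[:4])
    print(band[:, :4])
    ```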

  11. tchannon says:

    That’s my take, AA. The only cyclic part of the original data is annual, removed from the z stuff. This will have no effect on the decadal, which filters it out anyway.

    The opinion seems to be that it is not valid to filter data at all, yet the original data is already crudely filtered: that is how it got to monthly values and one value for a large area. The z data is giving the same result, just a unit change.

    I’m left with the impression I have had in the past whenever I step away from what everyone else does the same way. Sometimes I am right, sometimes wrong. Historically I would go and do it for real, or fail, but here there is no real-world proof.

    Perhaps my intent is unclear and has alarmed some. This has nothing to do with filtering to “improve” a match; it is simply about clarifying visually, by removing the visual effect of a large amount of fast noise. What appears is a small, longer-term bias, actually what warmists like to say is present, but the amount is I think much too small and inconsistent with the claims. Maybe the 1970s in particular hint at other things, or simply things move around a little.

    As a justification to mess with other people’s lives, no.

  12. A. Ames says:

    tchannon

    Even if there is something significant, I’m not sure it would justify messing with people’s lives!

    Since we are talking about time series of rainfall, we should make note of the classic discussions of Hurst by Mandelbrot. A good introduction is here: http://www.bearcave.com/misl/misl_tech/wavelets/hurst/index.html#Why.

    For some meteorological data the correlation is high enough that even 100 years is not enough to get to 95% significance. Koutsoyiannis has multiple discussions of the Hurst-Kolmogorov (i.e. weather) statistics. https://www.itia.ntua.gr/en/docinfo/1001/ will get you started.
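
    For anyone who wants a rough feel for H from a monthly series without special software, the aggregated-variance method is only a few lines (toy white-noise input shown, where H should come out near 0.5):

    ```python
    # Rough Hurst exponent by the aggregated-variance method: the variance of
    # block means of size m falls off as m**(2H - 2), so H = 1 + slope/2.
    import numpy as np

    def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32, 64)):
        x = np.asarray(x, dtype=float)
        log_m, log_v = [], []
        for m in block_sizes:
            n = (len(x) // m) * m
            block_means = x[:n].reshape(-1, m).mean(axis=1)
            log_m.append(np.log(m))
            log_v.append(np.log(block_means.var()))
        slope = np.polyfit(log_m, log_v, 1)[0]
        return 1 + slope / 2

    rng = np.random.default_rng(3)
    print(hurst_aggvar(rng.normal(size=4096)))   # white noise: expect H close to 0.5
    ```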

  13. tchannon says:

    Good stuff.

    The first link’s take is from a fiscal perspective. I am uncomfortable accepting this as more than incidental, since human systems are open to catastrophic control, upsetting what might be if left alone. There again…

    I am familiar with Koutsoyiannis; he has been mentioned on the Talkshop, and as an engineer the 1/f matters go back a long time for me. It is only relatively recently that I discovered Mandelbrot had touched on this – in hydrology, no less – even though I have one of his popular hardback books, so it was novel seeing the same thing with different takes in different fields.
    (If you go to my blog I’ve left a minor post as front page which links to http://www.itia.ntua.gr)

    Trouble is that going outside the mainstream leads to blank stares and people switching off.

  14. Phill says:

    Is this data accurate? Let’s say it is… roughly.

    Is this data normally distributed? On a yearly basis it is… roughly. There is actually a small positive skew. On a monthly basis it isn’t. The monthly data shows positive skews, with long tails of rainy months particularly over the cooler season. For this monthly data, assuming that it is normally distributed will underestimate the maximum rainfall that can fall.

    A positive skew is maybe an indication of a multi-modal distribution. What I suspect is that you have more than one basic weather pattern, depending on such things as the state of the NAO and the position of the jet stream. You really need to have normal curves and z-values for each separate pattern.

    Looking at the data back to 1766, there actually seems to be a small number of very rainy years (over 1200 mm) just where you would expect a normal curve to be disappearing to nothing. Clearly these years represent some special case, perhaps the jet stream stuck over Britain and bringing a conveyor belt of Atlantic storms.
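
    A quick way to see the skew point, assuming the same hypothetical year/month/rain_mm table used in the sketches in the post:

    ```python
    # Compare the skewness of annual totals with the skewness of each calendar
    # month's values; per the comment above, the monthly figures should show the
    # larger positive skews.
    annual_skew = df.groupby("year")["rain_mm"].sum().skew()
    monthly_skew = df.groupby("month")["rain_mm"].skew()

    print("annual totals skew:", round(annual_skew, 2))
    print(monthly_skew.round(2))
    ```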

  15. A. Ames says:

    tchannon and others

    Getting one’s head around fractals and HK processes is not easy but is well worth the effort.

    Before dismissing fractals as mere mathematical curiosities, note that Feynman in his book on path integrals showed that the quantum mechanical path of a particle is a sort of fractal in momentum-position space. Fractals have deep roots in physics.

    The paper 2010IMSC_ThingsNotToBeForgottenSM.pdf connects fractals to climate data.

    The section on Hurst’s analysis of Nile flow in the bearcave reference above ties them to rainfall and river flow.

    That “climate science” seems ignorant of H-K dynamics suggests serious insularity among its practitioners. Mandelbrot’s first book came out in the 1970s and his lectures were all standing room only, so it was hardly a secret.

    Cheers. aa

  16. tchannon says:

    Phill, the normalisation includes annual removal and adjustment of skews etc.; see the older works. It comes out pretty much normal even though the inherent wet/dry process is highly asymmetric (it doesn’t rain dryness). We don’t get anything prolonged and extreme, unlike some parts of the world.

    GAV doesn’t say much for these Met Office datasets. It might if less area-averaging of the data were used, as well as longer data. R.J. Oosterbaan’s site might interest some:
    “As a land and water management specialist at the International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands, I have worked since 1965 in almost 30 countries in Asia, Africa and Latin America, giving training courses, participating in research programs and providing project advice as a consultant. In 2002, I left ILRI to be independent.”

    http://www.waterlog.info/

    The country sits on the edge between maritime and continental climate, so yes, there is a lot of effect from direction, but also from the wander of the defined general circulation edges. An article will appear soon updating a previous work where this can be seen. (Ready, but a brief actual text has to be written.)

  17. tchannon says:

    AA,
    Snippet: CET, an artificial but long monthly dataset, does seem to have an exponent of 0.7, but none of the Met Office precipitation datasets I’ve examined using Selfis do. (Perhaps with preprocessing.)
    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.124.1697
    Java application is available.