There are many serious problems with paleo proxy datasets.
In this case I am starting to untangle two different proxies for solar activity: 10Be ice core vs. 14C wood, akin perhaps to ice hockey, wooden stick meets puck.
The test datasets are Solanki and Steinhilber; I have cited the references elsewhere, this being a quick proof-of-concept investigation.
A little while ago I noticed that between roughly 2000BC and today a cross-correlation showed an essentially straight line running between -1 and 1, although the result depends heavily on how exactly this is done.
The data ought to correlate well if both are solar proxies.
I’ve pulled out a subset of the data, 2000BC to 1900AD.
One dataset is sampled every 5 years, the other every 10 years, and the processing applied before we get the data is very different, so they are awkwardly mismatched, but at least regularly sampled. (Could be fixed up anyway.)
The first step is oversampling to annual, which includes the necessary low-pass filtering, taken further than strictly needed for the process; this knocks one dataset back towards the filtering already applied to the other before we get it.
The datasets are then normalised, which makes them easier to work on.
The second step was decimating to 10-year sampling; the result is two datasets with 1:1 sample points. Figure 1 shows the cross-correlation for a period of a hundred years or so.
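For readers who want to follow along, here is a minimal sketch of these steps, assuming numpy/scipy and illustrative variable names rather than the code actually used:

import numpy as np
from scipy import signal

def to_annual(values, step_years):
    # Upsample a regularly sampled series (5y or 10y step) to annual spacing;
    # resample_poly applies an anti-imaging low-pass filter internally.
    return signal.resample_poly(values, up=step_years, down=1)

def normalise(x):
    # Zero mean, unit variance, so the two proxies are directly comparable.
    return (x - np.mean(x)) / np.std(x)

# proxy_5y, proxy_10y assumed loaded elsewhere (5-year and 10-year sampling)
# a = normalise(to_annual(proxy_10y, 10))
# b = normalise(to_annual(proxy_5y, 5))
# a10, b10 = a[::10], b[::10]                    # decimate to 10-year points
# n = min(len(a10), len(b10))
# xcorr = signal.correlate(a10[:n], b10[:n], mode='full') / n   # rough normalisation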
Following an informed guess, some trial and experimentation produces Figure 2. One dataset has been decimated at 10.1 years, a sample rate conversion. This won’t be accurate, but the 1% change has a very noticeable effect, and for the better.
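The 10.1-year step can be sketched in the same spirit, here with plain linear interpolation standing in for the actual sample rate conversion:

import numpy as np

def warp_resample(t_annual, x_annual, step=10.1):
    # Resample x(t) onto a new grid with a non-integer step (here 10.1 years).
    t_new = np.arange(t_annual[0], t_annual[-1], step)
    return t_new, np.interp(t_new, t_annual, x_annual)

# t = np.arange(-2000, 1901)          # decimal years, 2000BC to 1900AD
# t_w, b_warped = warp_resample(t, b, step=10.1)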
Plotting, roughly lined up by eye
This result is nicely promising. [edit: correct plot is now shown, my mistake, tim]
So which dataset has a wrong timescale? I have no idea and do not trust either. Won’t go into detail on why here.
I now have to decide what to do next. I’ve been working towards creating a resampling method, and whilst it was not used for this exercise I now have a very important core algorithm around which I can wrap one of my control loops. I think the largest obstacle, apart from some ghastly coding (easy to say, hard to do), is a useful merit function. Given a merit function, I am probably within sight of software able to crawl around the data optimising the sample points.
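As a one-parameter illustration of what a merit function could look like (the real aim is to optimise individual sample points; names and bounds below are placeholders): maximise the correlation between the two proxies as a function of a single time-warp factor.

import numpy as np
from scipy.optimize import minimize_scalar

def merit(warp, t, ref, target):
    # Negative correlation between ref(t) and target read at warped times.
    warped = np.interp(t, warp * t, target)
    return -np.corrcoef(ref, warped)[0, 1]

# result = minimize_scalar(merit, bounds=(0.98, 1.02), method='bounded',
#                          args=(t, a, b))
# best_warp = result.x                 # ~1.01 would match the 1% warp above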
This might be interesting in relation to the insolation problem and very long climate cycles, where all the datasets are in a poor state. From what I have seen the orbital tuning attempts are not very good, wherein lie some tales. Can I do better? Very unlikely, but unless I try I won’t know. Plenty of successes in the past.

Hi Tim, nice work!
“So which dataset has a wrong timescale?”
My guess is that the blue curve in fig 3, which I think is the Solanki 14C data (may be worth labelling), lags the Steinhilber 10Be data by around the solar cycle length.
The reasoning behind my guess is that forests create their own microclimates, which ‘cushion’ the environment of individual trees to help them ride through large short term fluctuations in solar activity. This might also explain why the 14C data doesn’t match some of the bigger-amplitude short spikes in the 10Be data. In general, I’m more inclined to trust the 10Be data as a solar proxy, notwithstanding Vuk’s reservations about its geographical localisation, which the 14C data also suffers from.
Have both datasets been inverted to give a proxy for solar activity?
Often it is assumed that 14C is time “accurate”, except that is far from true. Quite likely this assumption comes from knowing trees have annual growth rings which can therefore be counted. **
I think you are trying to read too much fine detail from a rough result. If I decide to take this pair further I would hope more can be drawn from it. The underlying data does not resolve individual solar cycles (one of the datasets is filtered at vaguely 40y before publication), and anyway, 11-year or 22-year, which kind leaves a signature?
I’m of the same view that 10Be is likely to be the more direct proxy, albeit with problems.
Inverted? No, and one of them, before I normalised it, is published as a solar activity record.
Getting a relative match between two datasets might be extendible to many, leading to the holy grail: find a feature which is orbitally clocked, so that the whole lot can be shifted onto absolute dates.
This is not as blue-skies as it might appear, because I have working code in a different domain, just one example of the usage of the generalised adjustment loop I evolved many years ago; it doesn’t care what functions are inside.
** Unfortunately trees are known to sometimes produce two rings in a year, and in historic data missing rings or growth periods are common.
Dating by nuclear analysis has a considerable uncertainty and that is the only way to date as a spot measurement. The alternative of matching ring patterns is problematic.
This says enough to put up question marks; see Calibration.
http://en.wikipedia.org/wiki/Radiocarbon_dating
Thanks Tim, great work, I admire your tenacity in pursuing subjects you find of interest.
I am by no means an expert either on tree rings or 14C. I assume 14C atoms are absorbed into the wood through the leaves during photosynthesis of CO2, and from the ground water content via the process of osmosis. I am pretty certain that inner rings get a good proportion of both long after they are formed. Also, could one guarantee that the inner tree rings completely stop growing after they are formed?
Just a casual observation.
Tim:
Your Figure 3 is almost an exact match to the raw Solanki vs. Steinhilber plot, so I assume it hasn’t been adjusted in any way except for maybe a little smoothing. Is this right?
Yes, main difference is a 1% time warp. (assuming I grabbed the right plot)
Still wondering what to do next.
Tim:
A 1% time warp in one of the records would distort it by 40 years relative to the other over your 4,000 year time period. A 40-year displacement should be large enough to be visible when I compare your plot with the raw record, but I can’t see one. So maybe you did grab the wrong plot? (Been there, done that.)
Still wondering what to do next. How about a cycles analysis of Steinhilber 10Be to see how it compares with Solanki 14C?
Am I right in thinking that if the timescale of one series is wrong, then the decay time is different and so the measurements are also affected? Is this important?
Just finished my analysis of cycles in Be10 and C14 to compare and now see this article. Please see http://cyclesresearchinstitute.wordpress.com/2011/07/13/analysis-of-be10-records-as-a-solar-irradiance-proxy/
The cycles periods in the range 80 to 500 years are in pretty good agreement. Of course a year or two out in a few thousand years will not matter much for these purposes.
I too have had to try to deal with the different bases of 5 or 10 year intervals between data points. CRI software CATS does not allow mixing different bases, but we have a plan to allow conversion (not yet implemented).
Whoops, yes, it was the wrong plot. (Excuse: I was too tired yesterday to even look.)
FIGURE 3 HAS BEEN CHANGED
Changed it for an A-B
Model? Did that before the present exercise, which is a consequence of what I found. The data is so different that showing anything would cause confusion.
Tried a few other things as well but concluded the data needs fixing.
I might throw the shorter 2000BC to date subset at the software, see if that is less crazy.
An irregular timebase will have very bad effects. Clarifying slightly: non-linearity in time is a problem, whereas a merely different sample rate is no problem. If there is an irregular time axis where the actual times are given, that is not a problem either, just slower.
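A minimal sketch of that last point, assuming the actual sample times are known:

import numpy as np

def regularise(t_irregular, x, step=10.0):
    # Map an irregular but fully dated series onto a regular grid by interpolation.
    t_regular = np.arange(t_irregular[0], t_irregular[-1], step)
    return t_regular, np.interp(t_regular, t_irregular, x)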
Roger: (answer Ray afterwards, we need threading)
I’ve quickly put together a sim of both based on the published data, 2000BC to 1900AD, except I first normalised 10Be to the Solanki 14C solar scaling (makes the result more comprehensible), both using 13 terms. The datasets show as very different, including a major difference in the faster detail.
No tricks, done straight on automatic.
Figure 4
If anyone is interested I can put together a live spreadsheet (just a tidied copy of what is here) and somehow make it available, wherein lie some difficulties with distribution. (**)
Live means it literally computes the result if anything is changed. (it does not create a new model).
With this you can see the copious numbers, all parameters. Could be edited but generally pointless.
Much more interesting are the on/off switches for each term.
You can also change the date origin and output sample rate, or for that matter drop in a column of decimal dates. (output is computed for a date, doesn’t care what it is)
The two input datasets are different: one is sampled every 5 years, the other every 10, and they have slightly different starting decimal dates. (Just don’t change the phase reference number.)
Change one or the other to match, no problem.
You can also write new formula to do things.
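For anyone who would rather script it than drive the sheet, the live calculation amounts to something like the sketch below; the term values are placeholders, not the real 13 terms.

import numpy as np

def evaluate_model(dates, terms, offset=0.0):
    # Sum of sinusoidal terms, each given as (amplitude, period_years, phase_radians),
    # evaluated at whatever decimal dates are supplied.
    dates = np.asarray(dates, dtype=float)
    out = np.full_like(dates, offset)
    for amplitude, period, phase in terms:
        out += amplitude * np.sin(2.0 * np.pi * dates / period + phase)
    return out

# example: evaluate a two-term placeholder model every 10 years, 2000BC to 1900AD
# values = evaluate_model(np.arange(-2000, 1901, 10), [(1.0, 208.0, 0.0), (0.5, 104.0, 1.0)])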
The downside, read on
**
I use a spreadsheet as a GUI on data for post-processing; writing my own GUI is just far too major a task with today’s operating systems, as well as turning deliberately very portable code into a nightmare.
Two problems:
1. I use a very specific version of OpenOffice; all later versions are broken in fatal ways (so is LibreOffice).
2. Whilst some other spreadsheet packages can open OO Calc files, any plots do not survive well, if at all. Plots are exactly what is broken on later OO versions.
This means that spreadsheet transfer is best done by export to the older MS .xls format, which is universal (including on ’nix) AND NO PLOTS, you have to do your own. Can be ready to roll.
2a. WordPress does not accept OO Calc files (but does accept word processor files). WP does allow .xls, but data compression can’t be used and archives are not allowed. Alternatively I could make it available on a server I control, but space is limited and trust is more of a problem: you don’t know me.
Ray,
Not quite sure what you mean by decay time.
The ice core data says the timescale is ss09, which is one of many different scales. Not looked it up, but 09 presumably means 2009. There are many papers on these timescales and to me they confirm ice moves and is unreliable, let alone the problem of compression. Assume the ice moves every which way, plus, if precipitation is factored in, actually an unknown, goodnight.
The ~200y is problematic for a static decomposition at the same time as raising many important questions.
Any fairly clear definite periods point at a physical cause, noise cannot do that, although it could excite a resonator, including a semi-chaotic oscillator, still begging the question of why that period?
If the data were accurate I could split any extremely narrow lines if they are actually doublets; families are more difficult. This is most likely what you are seeing with the 200y modulation, I add, if it is real.
I’ve already looked and there is a suggestion of known orbital, in other words this might be manifesting from long term. Is that solar or an artefact from something else? No way of knowing.
I am fairly sure the 200y in your second figure is at least a doublet.
Point something out. If there were lines at 206y and 208y, then
(206 x 208) / |206 - 208| ≈ 21,000y
and we know what that would point to.
Real effect? No idea. Seems too far out from what we know.
Perhaps more interesting is the 86..88y, which shows obvious broadening. Given that half that frequency is the solar system gravity repeat cycle it might be highly significant; moreover, that is known to be slightly irregular… which would broaden it.
Amongst others this harks back to
With that data, given the 5-year samples were resampled after a claimed 40-year low pass, simply omit every other sample. (A pretty awful filter was used, but there is very little below 30 years.)
Tim:
The time warp reconstruction falls apart when you project it back before 4000BP. This is because the lead/lag between the two records isn’t constant. 14C leads 10Be by up to 30 years after 5500BP but lags 10Be by up to 50 years before 6000BP. The average displacement is 20 years.
Another point. With leads/lags this small we can get a peak-trough match over the entire 9,300 year record length by shifting one or both records sideways by only a decade or two. This doesn’t sound like a serious age-dating bust to me. In fact, when you consider all the potential sources of error the surprising thing is that the 14C and 10Be records match up as well as they do.
Not looked at the offset numbers but that’s possible (I thought some of the time errors were much larger), and yes, it is surprising.
The timings match up well, which is great. The amplitudes not so much, maybe for the reason I stated earlier. But there is nothing regular enough about the ‘microclimate hypothesis’ to enable a standardized formula to translate between the datasets. A good reason not to trust treemometers I should think. 🙂
Tim, you might find our CATS software suitable for your purposes. Free download from http://www.cyclesresearchinstitute.org/cats/index.shtml
It has the ability to do calculations on time series, including regressions, factor analysis etc., and has a macro facility which makes repetitive calculations easy. It produces graphs like the ones in my articles. It allows a collection of time series to be processed together with a common time base of any number of years, months, days, etc. It comes with a collection of a few hundred useful data files.
Tim, my question about decay time… When the researcher determines the amount of Be10 or C14 that was around in some past year, they have to allow for how many years have passed between then and now (now being when they sample it). Presumably they just apply exponential decay in reverse to do this. If they have the number of years wrong for a long section, then it seems to me that this will result in not only a horizontal displacement (as you picked up) but a vertical displacement as well, because all the exponential adjustments will be wrong by the same ratio too. Does that make sense?
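A rough sense of the size of that effect, using standard half-lives (numbers here are mine, for illustration only):

import numpy as np

def decay_scale_error(dt_years, half_life_years):
    # The back-correction is N0 = N_measured * exp(lambda * age), so an age error
    # dt multiplies the reconstructed value by exp(lambda * dt).
    lam = np.log(2.0) / half_life_years
    return np.exp(lam * dt_years)

print(decay_scale_error(40, 5730))      # 14C (half-life ~5730 y): ~1.005, about 0.5% high
print(decay_scale_error(40, 1.39e6))    # 10Be (half-life ~1.39 My): ~1.00002, negligible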
Tim, yes, you can only split frequencies when they have about one whole cycle difference in the whole time period available. (MESA might allow a little closer) Sometimes you do get a bump that hints at finer division. Removal of a strong cycle can then allow the weaker neighbour to be seen. With a 207 and 208 year cycle, I am content at this stage to say a 207.7 year cycle exists or whatever. In my experience, a single cycle’s period often can be determined to 0.1 cycles over the whole period, e.g. if there are 50 cycles (as in 200 year cycle for 10,000 years say) then uncertainty is 0.1/50 or 0.2% meaning 200 year cycle period is uncertain by about 0.4 years. This guideline is a bit rough, it may be twice that good or half that good. I determined this from real data such as seasonal cycle components over hundreds of years in commodity prices – there are 12, 6, 4, 3, 2.4 and 2 month cycles with monthly data.
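As a quick check of that rule of thumb against the 206/208 year pair over the 9,300 year record discussed above:

record_length = 9300.0
cycles_206 = record_length / 206.0      # ~45.1 cycles
cycles_208 = record_length / 208.0      # ~44.7 cycles
print(cycles_206 - cycles_208)          # ~0.43 of a cycle apart, below the ~1 cycle
                                        # threshold, so a plain spectrum will not split them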
Ray says:
… then it seems to me that this will result in not only a horizontal displacement (as you picked up) but a vertical displacement as well, because all the exponential adjustments will be wrong by the same ratio too. Does that make sense?
It makes perfect sense, and seems to be something worth embedding into the analysis.
Tim: Could a chirp be done on the combined data to see which periodicities jump out? It seems to me that observed cycles in beach ridges, varves etc might help in calibration here.
Some time since I looked at the CRI site; software was, if I recall, mooted back then. Not now; I’ll spend some time on it later.
Decay time.
Not sure what the formal name for that might be and the name will vary with field.
I’m assuming the proxy acts as an integrating entity, if poorly.
At the moment I feel this is a minor issue, particularly because gross variation in [effective] sampling rate doesn’t seem to be the case.
Won’t write more, too much space.
Split close frequencies.
I can do that. The limitation is indeterminate, mostly dependent on data cleanness.
This is part of why I do not follow the well worn path, same tools and methods, same path.
I am doing an FT using discrete input data; it is not a DFT, so binning does not apply.
Maybe: phase-lock to one line and remove it, then do the next.
Useful results in practice are rare.
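A minimal sketch of the phase-lock-and-remove idea, using a plain least-squares sinusoid fit at an arbitrary trial period rather than the actual algorithm used here:

import numpy as np

def fit_sinusoid(t, x, period):
    # Least-squares amplitude/phase of a sinusoid at the given period,
    # returned as (cos, sin, offset) coefficients plus the fitted curve.
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    w = 2.0 * np.pi / period
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs, A @ coeffs

# coeffs, fitted = fit_sinusoid(t, x, 206.0)
# residual = x - fitted                  # remove this line, then fit the next candidate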
If you want to see that kind of thing, an example which was done as an exercise peripheral to helping out Paul Vaughan with something is here
There are many closely split items in that data, where in most cases the correct answer is 18.6 years, and that is what comes out, give or take. Useful? Only as confirmation of what ought to be there.
The huge question, which has no definite answer, is the process causing related items; I think particularly of whether the process is simply linear or the result of non-linearity.
Linear.
An example is the lunar gravity term at, say, 9 days, which is actually a set of closely spaced lines caused by the lunar precession and the Saros cycle: an additive process.
Non-linear.
No clear real examples come to mind. These tend to be multiplicative, involving log and power non-linearities. If two frequencies drive through such a function the result is a wild farm of many items which very likely feed back in to produce even more.
One easy to understand kind of non-linear example is with polar ice. I tried to explain/demonstrate that here http://daedalearth.wordpress.com/2011/04/30/how-polar-ice-is-modulated-by-the-sun/
Key point: non-linearity produces frequencies which are not present in the stimulation.
Strictly the definition of non-linear is a bit wrong. How many pages do you want?
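To illustrate the key point with made-up numbers: push two sinusoids through a mild squaring non-linearity and new lines appear at the sum, difference and harmonic frequencies, none of which are in the input.

import numpy as np

n = 4096
t = np.arange(n)
f1, f2 = 80.0 / n, 133.0 / n                  # exact FFT bins, to avoid leakage
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.5 * x**2                            # the non-linear (multiplicative) term

spectrum = np.abs(np.fft.rfft(y))
# the strongest new lines sit at f2-f1, 2*f1, f1+f2 and 2*f2 (bins 53, 160, 213, 266)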
TB, lots of possible things, and so on. Easy answers? No.
I’ve jury-rigged a generalised resampling method, some fun math, even ending with Horner’s rule.
Does it work? Oh yes. Non-integer resampling. It is not complete, but once a working example is put together the rest is just a long effort.
For the hell of it I dropped in the two datasets we are talking about; a few twiddles and it works. Whilst with a lot of faffing around I could do the whole thing by hand, that would be rather pointless; knowing I could is all that matters.
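For anyone curious how the pieces might fit together, a minimal sketch (not the method used here): a cubic interpolation kernel with its polynomial evaluated by Horner’s rule, reading a regularly sampled series at arbitrary non-integer positions.

import numpy as np

def cubic_at(y, pos):
    # Catmull-Rom cubic interpolation of array y at fractional index pos.
    i = int(np.floor(pos))
    mu = pos - i
    y0, y1, y2, y3 = y[i - 1], y[i], y[i + 1], y[i + 2]
    c0 = y1
    c1 = 0.5 * (y2 - y0)
    c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2)
    return ((c3 * mu + c2) * mu + c1) * mu + c0     # Horner's rule

def resample(y, ratio):
    # Read y at steps of 'ratio' old samples, e.g. ratio=1.01 for a 1% time warp.
    positions = np.arange(1.0, len(y) - 2.0, ratio)
    return np.array([cubic_at(y, p) for p in positions])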
There are many different ways of moving things, mind bending.
A warning: moving sample times *will* change the data structure, so it is dangerous and the end objective must be kept in mind. Usage for correcting timing errors should reduce structural error, but how can we know what is right?
Tim, regarding splitting lines. The method that I use in CATS is to determine the spectrum at closer intervals than an FFT does. This is settable, and I usually use 1/4 of the FFT interval (it can be set to 1/10 or 1 or whatever), which is just to find roughly where the peaks lie. The software then works to 0.01 frequency steps, which is enough as the results are usually +/- about 0.1 steps. Very rarely, real peaks do lie closer than 1/4 step and one is missed; setting the parameter differently will find these extra peaks. It can normally be spotted by a shoulder on a peak. A friend who has studied many methods and compared the results of lots of programs finds my results to be very similar to DFT.
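For anyone wanting to reproduce a finer-than-FFT frequency grid, one simple route (not necessarily what CATS does internally) is zero-padding before the transform:

import numpy as np

def fine_spectrum(x, oversample=4):
    # Evaluate the spectrum on a grid 'oversample' times finer than the natural
    # FFT spacing; this interpolates the spectrum without adding information.
    n = len(x)
    amplitude = np.abs(np.fft.rfft(x - np.mean(x), n * oversample))
    freqs = np.fft.rfftfreq(n * oversample)   # cycles per sample
    return freqs, amplitude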
A 10Be record we haven’t considered is the Taylor Dome record, which goes back over 200,000 years, although the reading interval is too coarse to support cycles analysis. However, when we compare Taylor Dome 10Be with Taylor Dome 18O we find that the abrupt temperature increase at the beginning of the current interglacial about 15,000 years ago coincided with a five-fold decrease in 10Be concentration (from around 100 atoms/mg to current levels of around 20 atoms/mg).
Tentative conclusions:
1. The current interglacial was a result of a large and abrupt increase in solar activity.
2. The preceding ice age occurred during a prolonged period of very low solar activity. (A Mega-Maunder-Minimum, maybe.)
Comments?
Roger,
As I understand it, and I might be wrong; please correct me if anyone else knows better.
10BE accumulates within an ice layer.
The units given are 10Be atoms per unit mass of, presumably, ice.
The radiative flux which caused the 10BE has to be computed/approximated.
We can work out the depth of each ice layer and its age in years.
Now try and compute the flux.
There are many problems with this. In this case we do not know how much the incoming snow has compressed towards completely solid ice. The dataset does say the bottom is under relatively low pressure and is not solid.
If you compute the above I think you will get a surprise. Dreadful data but perhaps there is some meaning.
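A minimal sketch of the flux step Tim outlines, assuming the usual relation of concentration times ice accumulation rate; the accumulation rate is exactly where the compression and precipitation unknowns enter.

def be10_flux(concentration_atoms_per_g, accumulation_g_per_cm2_yr):
    # Deposition flux in atoms per cm^2 per year; both inputs are per ice layer.
    return concentration_atoms_per_g * accumulation_g_per_cm2_yr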
Tim:
It seems you have to jump through hoops to calculate solar flux from 10Be (see Steinhilber et al., http://www.novaquatis.eawag.ch/organisation/abteilungen/surf/publikationen/2010_steinhilber.pdf). But 10Be concentration is a solar proxy by itself (see http://upload.wikimedia.org/wikipedia/commons/6/60/Solar_Activity_Proxies.png. The Y-scale on this graph is in 10^4 atoms/gram of ice and Taylor Dome 10Be is given in atoms/microgram.)
Anyway, my point in bringing the Taylor Dome correlation up wasn’t to provoke yet another discussion on whether solar proxy data are any good. I was just hoping that someone might notice that it provides evidence for a direct causative link between solar activity and ice ages.
Another relevant paper comparing 10Be and 14C results:
Click to access WagnerBeeretal01-205yrCycin10Be.pdf
Particularly with reference to the de Vries cycle (205 years) found to be of solar origin via geomagnetic proxies
popup fact:
All Mars retrograde stations repeat at a 205-year cycle. Currently the November 15 Mars retrograde occurred at 12° Cancer 26′. Looking back 205 years, Mars turned retrograde on November 15, 1802 at 12° Cancer 28′. Back another 205 years Mars stationed at 12° Cancer 20′ on November 14, 1597, and so on, looking back at 205 year intervals.
Nice factoid, Rog – thanks. The 200y(ish) de Vries cycle seems to be one of the main climate drivers, with periods of fewer sunspots causing a sea change for life here on Earth. Perhaps we will get to observe a different mode of solar activity over the next few years, and I ponder that perhaps we will see more CMEs, flares and, perhaps, some super-massive proton events when the Sun loses its ability to disperse surplus energy via the mechanism of regular sunspots. If it happened, an SMPE could certainly cause problems for us and the rest of the biosphere.
Here’s a link to a graphic which explains how the observed Mars’ retrograde orbit occurs – takes a few seconds for the animation to start.
Perhaps it is an omen that Mars, the harbinger of war and chaos, is linked so closely to the de Vries cycle!!!
Here’s the promised link to the Mars retrograde orbit animation…
Nice little animation Tenuc, thanks for that. The thing is, the retrograde motion occurs at many different points round and round the zodiac wrt the ‘fixed stars’ before it re-occurs in the same place at the completion of the 205 year cycle. So an explanation of how it might be linked to a 205 year cycle evident in proxy reconstructions of Earth’s geomagnetism (and by extrapolation solar activity) is unclear. It does seem, with the number of ‘coincidences’ in the solar system, that if we keep on looking we might find more 205 year celestial connections. I suspect we might find some close subharmonics which beat with the 205 year cycle and modulate it over a period of 15,000 years or so, judging by the paleo geomag reconstruction in the paper I linked above.
The heliocentric or synod conjunction is in the middle of the retrograde motion section, when the solar wind downstream of the Earth’s magnetosphere sweeps past Mars, causing an inductive pulse and giant global dust storms.
Richard, interesting. Venus and Mars don’t have their own magnetospheres. Venus has one induced for it by the Sun. Maybe Mars gets one induced for it by the Earth when the alignment is right? I still don’t see how that would produce a 205 year cycle though, given that retrograde synod conjunctions take place at all points of the compass throughout the 205 years between precise alignments.
Good puzzle. 🙂
205 years / 11 cycles = 18.636 years, or 22 cycles of 9.318 years – close to the lunar nodal cycle?
Parasympathetic oscillations of the inner planets with regard to drift relative to the Center of the Galaxy positions? Changes in the magnetic inductive effects of the strength of the solar wind somehow tying all of these periods together via some interactive inductive effects?
I think it all comes down to interactive effects of orbital dynamics and electromagnetic induction effects that act through homopolar generator couplings to effectively spin orbit couple the dynamics into repeating patterns of observable solar activity and planetary weather dynamics on all planets in some form consistent with their individual atmospheric constructions.
Sounds about right to me, but Leif complains that the solar wind is way too weak to do it.
Nice tie-up with the lunar nodal cycle. How does Harald Yndestad’s ~75 year lunar – north Atlantic finding fit into it?
Roger, the possible abrupt change in solar activity at the interglacial is not entirely out of keeping with some theories. Firstly, standard Milankovitch theory does not work: it does not explain why sometimes the 41,000 year period dominates and sometimes the 100,000 year period. There is a guy who has a solar model which predicts a number of different solar modes, and the periods come out roughly in proportion to 1/n^2 with n = 1, 2, 3, 4 etc. This gives quite close to the series 405K, 97K, 41K, 23K etc. as actually observed. This has been published in a journal; the only thing is I can’t remember his name.
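A quick arithmetic check of that 1/n^2 pattern, taking the 405K period as the n = 1 mode (the identification with the observed periods is from that model as described, not mine):

periods_kyr = [405.0 / n**2 for n in (1, 2, 3, 4)]
print(periods_kyr)    # -> 405.0, 101.25, 45.0, 25.3125 kyr, vs the quoted 405, 97, 41, 23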
Richard, 207.7 years / 18.61 years = 11.16, so not really an integer. I have gone through many possibilities with the 207-208 year cycle and I am pretty convinced that it has nothing to do with the planets. Even considering modulations from longer cycles, it cannot be produced as a side band.
Richard, let me clarify that – it is not based solely on planetary periods. It might be a planetary period interacting with a natural solar oscillation.
Ray, try to remember, sounds interesting!
Would the 207.7 years ‘cycle’ need to be a sinusoid to produce a peak in the spectrum? Could it be more like an ‘event’ every 207.7 years rather than a smoother meandering up and down? If so, might we be in for a set of big flares leaving a trace in the 10Be deposition data?
Natural solar oscillations (not driven by anything outside the Sun) are not in evidence according to Leif’s reading of the GONG helio-seismology data. Not that we have enough of that to know about long periods, but he says not even short period signals are in evidence. Don’t know what to make of that really.
Ray, it may be that not all signals are traceable to specific planetary interactions, and that would show that there are activities independent of planetary interactions, which is quite possible. I am not trying to tie all signals to one set of possible drivers like the CO2 crowd; just that by asking/asserting a relationship I get feedback from people such as yourself to help define how much of what is to be considered, to end up with the truth.
Stupid questions get fast answers (which is why I ask them) and stop me from wasting time tracking down dead ends, so thanks for your input. I have been reading your work on the nature of cycle and harmonic interactions for many years, without making many comments on the cycles group. I have much respect for your opinions, and have built much of what I post on my blog pages upon what I have learned from these many years of reading cyclic research.
Would it be too much to ask what you think of the concept of using a 6558 day repeating cyclic pattern of the inner planet returns, coincident with the same period of 240 lunar declination cycles and the magnetic rotations of the Sun, to reconstruct the average of the daily meteorological conditions on the Earth’s surface and present it as a forecast for the next cyclic pattern period?
Tallbloke, it need not be sinusoidal. Indeed, as there is a 104 year cycle also in both Be10 and C14, we can say that it has some shape. Indeed, the CATS Wrap function shows that for Be10 it has one deep trough and a triple top. For C14 it has one deep trough and more of a double top. That means that the 208 and 104 year components are probably synchronized at the low.
Richard, I agree with you about questions – they are not stupid, just eliminating possibilities.
I don’t think that there is enough commensurability between all those things for that to really work. Ken Ring in NZ does long range weather forecasts based on a period of 18 years (or very close to that) and the weather people severely criticize him. He just uses an exact weather map from that long ago. I have heard that in the UK the weather does run down a groove (as it were) repeating some previous weather very accurately for many days on end and then suddenly jumps to a different groove. This may be a fruitful avenue if you can work out why it jumps grooves.
I can easily do lagged correlations of long data series, and in general there are no magic repetitions. But monthly sunspots numbers do get a quite high correlation of 0.656 after 2529 months (210.75 years). However the record is only 260 or so years long, so the overlap is not great. It might be a fluke if it were not confirmed by the long term 208 year cycle.
The periods and year of maximum phases of the 207 and 104 year cycles found from C14 are:
C14 207.7 years, max 2007; (104.3 years, max 1978)
Be10 206.8 years, max 1995; (104.7 years, max 1980)
So when compared to temperature, the 207 year cycle can be said to be rising throughout the 20th century, and falling throughout the 21st century.
Hmm, there’s that roughly solar cycle length ‘lag’ again.
Roger Andrews found a ~105 year cycle in SST vs SAT. There’s an article on the blog here somewhere he wrote.
Ray, that was a really interesting section in the pdf you linked on the modulation of cycles by their own harmonics. I’ll make a separate article out of that subsection if I may.
Not sure what you refer to Roger, but please go ahead.
My recent knowledge, given to me by McCracken, is that the 10Be record uses the 14C record to set its timeline, due to the poor dating precision of ice cores. So one record is piggy-backed on another, with the 14C record also not beyond question. 14C relies on dendrochronology dating, which is far from perfect and which I think I can show is at least 10% incorrect.
New research has uncovered the accuracy of the 4627 year cycle of the Jovians. This cycle will not repeat forever, but it is of use over the Holocene. I have found that the outer planets return to their positions within 2 deg over 4627.25 years, which is very close in astronomical terms, and if the planet positions control the Sun we should expect repeating patterns over the Holocene.
The LIA is the largest and deepest solar downturn of the Holocene, and if we go back 4627 years we should see the same? (aka -3155)
The planet positions of -3155 are almost identical, along with the solar path, but the solar proxy record shows an extreme solar maximum? (-3155 is 4627 years from the LIA centre of 1472.)
But if we go back another 340 years we find a similar LIA period. I suggest the dating method of the 14C record and beyond is bogus.
http://www.landscheidt.info/?q=node/323
This may be not quite “repeating patterns in physics” but getting close?
Thanks for the report Geoff. If we can get the solar-planetary theory onto solid ground, there’s a possibility we could use the theory to help recalibrate the proxy. Then the residual in the time axis dilations/contractions for different dendro series might give us some clues as to regional growing conditions.
Painstaking work, and still not very definitive once done.