1 Introduction
Radiometric dating is a family of techniques used to date material containing radioactive elements (Gunten et al., 1995), which decay over time. These techniques use the radioactive decay equation ($N(t) = N_0 e^{-\lambda t}$, where $N(t)$ is the quantity of a radioactive element left in the sample at time $t$, $N_0$ is the initial quantity, and $\lambda$ is the element's decay constant) to infer the age of the material being investigated.
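As a quick numerical illustration of the decay law (a sketch of ours, not part of the paper; the quantities are illustrative), the equation can be evaluated forwards and inverted to recover an age:

```python
import math

HALF_LIFE = 22.3                 # 210Pb half-life in years
LAM = math.log(2) / HALF_LIFE    # decay constant, lambda = ln(2) / half-life

def remaining(n0, t):
    """Quantity left after t years: N(t) = N0 * exp(-lambda * t)."""
    return n0 * math.exp(-LAM * t)

def age(n0, nt):
    """Invert the decay law: t = log(N0 / N(t)) / lambda."""
    return math.log(n0 / nt) / LAM

# After one half-life, roughly half of the initial quantity remains
half_left = remaining(100.0, HALF_LIFE)
```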
$^{210}$Pb (lead-210) is a radioactive isotope which forms part of the $^{238}$U (uranium) decay series. $^{238}$U (solid) is contained within most rocks and decays into $^{226}$Ra (radium, solid), which later decays into $^{222}$Rn (radon, gas). Since $^{222}$Rn is a gas, a proportion escapes to the atmosphere, where it decays into $^{210}$Pb (solid), which is later transported to the earth's surface by precipitation. $^{210}$Pb deposited this way is labelled "unsupported" or excess ($^{210}$Pb$_{ex}$). On the other hand, $^{210}$Pb produced by $^{222}$Rn which decays in situ within the sediment is labelled "supported" ($^{210}$Pb$_{sup}$). By distinguishing between supported and unsupported $^{210}$Pb, one can determine the age of sediment by measuring the $^{210}$Pb at a given depth and comparing it to the rest of the sediment.
The sediments within lakes, oceans and bogs contain layers of biotic and abiotic fossils, which can be used as indirect time series of environmental dynamics as the sediments accumulate over time. Whereas both unsupported and supported $^{210}$Pb decay over time, supported $^{210}$Pb is replenished through decay of the radon contained within the sediment. That is why the concentration of supported $^{210}$Pb remains at equilibrium while that of the unsupported $^{210}$Pb decreases and eventually reaches zero. This is the basis for $^{210}$Pb dating. Given its relatively short half-life of 22.3 years, $^{210}$Pb has been used to date recent (up to $\sim$100–150 yr old) sediments. This time period is of particular importance when studying the effects of humans on the environment. Unfortunately, the current $^{210}$Pb dating models were not created within a statistical framework. This has given rise to a series of ad hoc uncertainty approximations (Appleby, 2001; Sanchez-Cabeza et al., 2014). Here we introduce both a new treatment of $^{210}$Pb data and a dating model created within a statistical framework, with the objective of providing more reliable measures of uncertainty.
2 Modelling of $^{210}$Pb data
As outlined above, $^{210}$Pb within sediment is naturally formed from two sources: from surrounding sediment and rocks containing $^{226}$Ra (supported), and from the atmosphere (unsupported). Modelling these two sources of $^{210}$Pb is crucial to the development of $^{210}$Pb age-depth models. Since supported and unsupported $^{210}$Pb are physically indistinguishable from each other, in order to model both sources we have to make assumptions depending on the measurement techniques used. Measurements of $^{210}$Pb can be obtained by alpha or gamma spectrometry. The latter technique also provides estimates of other isotopes such as $^{226}$Ra, which can be used as a proxy of the supported $^{210}$Pb in a sample (Krishnaswamy et al., 1971).
2.1 Supported $^{210}$Pb
If gamma spectrometry is used, supported $^{210}$Pb can be assumed to be equal to the measured concentration of $^{226}$Ra. When the sediments are analysed using alpha spectrometry, $^{226}$Ra measurements are not available and estimates of the supported activity can only be obtained by analysing sediment which has reached background (samples which no longer contain unsupported $^{210}$Pb); in this case a constant supported $^{210}$Pb is assumed. These two different ways of inferring the supported activity can be formalised by the following equations:
(1) $p^T_i = p^S_i + p^U_i$,

(2) $p^T_i = p^S + p^U_i$,

where $p^T_i$ is the total $^{210}$Pb, $p^U_i$ is the unsupported $^{210}$Pb, and $p^S_i$ (or the constant $p^S$) is the supported $^{210}$Pb in sample $i$. Depending on the site and on the availability of measuring techniques, one of these equations can be used to differentiate supported from unsupported $^{210}$Pb.
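The two cases can be illustrated with a minimal sketch (all activity values below are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical activities for three samples (values are illustrative only)
total = [120.0, 80.0, 45.0]    # p_i^T: total 210Pb per sample
radium = [20.0, 21.0, 19.0]    # 226Ra measurements (gamma spectrometry case)

# Equation-(1) case: a per-sample supported term taken from 226Ra
unsupported_gamma = [t - s for t, s in zip(total, radium)]

# Equation-(2) case: one constant supported level, e.g. estimated from
# samples that have reached background (alpha spectrometry case)
p_s = 20.0
unsupported_alpha = [t - p_s for t in total]
```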
2.2 Unsupported $^{210}$Pb
In order to model the unsupported $^{210}$Pb, some assumptions have to be made regarding the precipitation of this material from the atmosphere. A reasonable assumption is that of a constant flux or rate of supply (Appleby and Oldfield, 1978), which implies that over fixed periods of time the same amount of $^{210}$Pb is supplied to the site.
Following Appleby and Oldfield (1978), the assumption of a constant rate of supply implies that the initial concentration of $^{210}$Pb at depth $x$ (which is linked to age by a function $t(x)$), $P^0(x)$, weighed by the dry mass sedimentation rate $r(x)$, is constant throughout the core:

(3) $P^0(x)\, r(x) = \Phi$,

where $\Phi$ is a constant. The dry sedimentation rate is the speed at which the sediment accumulates, weighed by the sediment's density at that depth, i.e.

(4) $r(x) = \rho(x)\, \dfrac{dx}{dt}$,

where $\rho(x)$ is defined as the density of the sediment at depth $x$ and $dx/dt$ is the rate at which the core accumulates with respect to time. Considering that the relationship between depth and time is expressed by the function $t(x)$, then $x(t)$ is its inverse, and since $t(x)$ is a one-to-one function,

(5) $\dfrac{dx}{dt} = \left(\dfrac{dt}{dx}\right)^{-1} = \dfrac{1}{t'(x)}$.
Since $^{210}$Pb is a radioactive isotope, it follows from the radioactive decay equation that

(6) $P(x) = P^0(x)\, e^{-\lambda t(x)}$,

where $P(x)$ is the concentration of unsupported $^{210}$Pb found at depth $x$, and $\lambda$ is the $^{210}$Pb decay constant ($\lambda = \log(2)/t_{1/2}$, with $t_{1/2}$ the 22.3 yr half-life). Using equations (3), (5), and (6) the following relationship is obtained:

(7) $P(x) = \dfrac{\Phi}{\rho(x)}\, t'(x)\, e^{-\lambda t(x)}$.
Considering that $^{210}$Pb is measured over a slice or section of the sediment, this relationship has to be integrated over that section to relate it to the corresponding measurement, that is,

(8) $p^U_i = \displaystyle\int_{x_i-\delta_i}^{x_i} \rho(x)\, P(x)\, dx = \dfrac{\Phi}{\lambda}\left(e^{-\lambda t(x_i-\delta_i)} - e^{-\lambda t(x_i)}\right)$,

where $x_i - \delta_i$ and $x_i$ are the upper and lower depths of the sample respectively, $\delta_i$ is the sample's thickness, and $p^U_i$ is the unsupported activity in section $i$. Equation (8) provides a link between the age-depth function $t(x)$ and the unsupported $^{210}$Pb activity in a given section. This is the primary tool to construct an age-depth model based on a constant rate of supply.
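The integral above has the closed form shown, so a section's unsupported activity depends only on the ages of its top and bottom. A small sketch (our own; the supply value 150 is arbitrary) also verifies the additivity of adjacent slices:

```python
import math

LAM = math.log(2) / 22.3  # 210Pb decay constant (1/yr)

def section_activity(phi, t_top, t_bottom):
    """Unsupported activity accumulated in a slice whose top and bottom
    depths have ages t_top < t_bottom:
    (phi / lam) * (exp(-lam * t_top) - exp(-lam * t_bottom))."""
    return phi / LAM * (math.exp(-LAM * t_top) - math.exp(-LAM * t_bottom))

# Two adjacent slices spanning 0-25 yr and 25-50 yr together carry the same
# activity as a single slice spanning 0-50 yr (the integral is additive)
parts = section_activity(150.0, 0.0, 25.0) + section_activity(150.0, 25.0, 50.0)
whole = section_activity(150.0, 0.0, 50.0)
```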
3 Current approach
The Constant Rate of Supply (CRS) model (Goldberg, 1963; Robbins, 1978; Appleby and Oldfield, 1978) is the most commonly used $^{210}$Pb dating model. It uses the constant rate of supply assumption presented in section 2, and the following equations to obtain a chronology:
(10) $A(x) = \displaystyle\int_{x}^{\infty} \rho(u)\, P(u)\, du$,

(11) $A(x) = A(0)\, e^{-\lambda t(x)}$,

(12) $t(x) = \dfrac{1}{\lambda} \log\!\left(\dfrac{A(0)}{A(x)}\right)$,

where $A(x)$ is the remaining unsupported activity below depth $x$, and $A(0)$ is the unsupported activity in the whole core. The CRS model can be summarized by equation (12), and from its $A(0)$ term one can deduce that this model depends strongly on measuring $^{210}$Pb activity throughout the whole core. The effect of wrongly estimating this variable is described in Appleby (1998). If the activity cannot be measured throughout the entire core, interpolation is suggested (Appleby, 2001). Moreover, if the lowest sample has not reached background, and thus still contains unsupported $^{210}$Pb, extrapolation is suggested. Because this model is based only on the unsupported activity, precise estimates of supported $^{210}$Pb are necessary in order to obtain reliable estimates of the unsupported $^{210}$Pb. Depending on the equipment used to obtain the $^{210}$Pb concentrations, and on the model used to distinguish supported from unsupported $^{210}$Pb, this could be problematic. Wrongly estimating this variable will directly impact the estimate of $A(0)$, which will in consequence affect the resulting chronology.
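Equations (10)-(12) can be sketched numerically as follows. This is our own minimal implementation, with trapezoidal integration standing in for the integral in (10); it is not the code of Appleby (2001). Note that the deepest point is excluded, since there $A(x) = 0$ and equation (12) diverges:

```python
import math

LAM = math.log(2) / 22.3  # 210Pb decay constant (1/yr)

def crs_ages(depths, unsupported, densities):
    """CRS ages t(x) = (1/lam) * log(A(0) / A(x)), with A(x) approximated by
    trapezoidal integration of the density-weighted unsupported activity."""
    f = [r * p for r, p in zip(densities, unsupported)]
    cum = [0.0]                       # cumulative activity from the surface
    for i in range(1, len(depths)):
        dx = depths[i] - depths[i - 1]
        cum.append(cum[-1] + 0.5 * (f[i] + f[i - 1]) * dx)
    a0 = cum[-1]                      # total unsupported activity in the core
    # A(x) = a0 - cum; the bottom point (A = 0) is dropped to avoid log(inf)
    return [math.log(a0 / (a0 - c)) / LAM for c in cum[:-1]]
```

On synthetic data generated from a known linear age-depth relation, the recovered ages track the truth closely as long as the core has effectively reached background.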
3.1 Example
To show the results of the current approach and later compare them to ours, data obtained from a site in Havre-St-Pierre, Quebec, Canada will be used. The core (HP1C) was obtained in July 2012 and was analysed using alpha spectrometry at Exeter University, UK. Table 1 contains the data from core HP1C. As previously mentioned, alpha spectrometry does not provide estimates of $^{226}$Ra as is the case for gamma spectrometry, but, contrary to the latter, it can measure far smaller quantities of $^{210}$Pb. To date this core, the CRS model was calculated using the recommendations in Appleby (2001).
One of the first steps in applying the CRS model is to identify the supported $^{210}$Pb. For this purpose the last 4 samples were averaged to obtain an estimate of the mean and standard deviation of the supported activity. This value was subtracted from the total $^{210}$Pb of each sample to obtain estimates of the unsupported activity. Following Appleby (2001) one can obtain the dating shown in Figure 1. This methodology requires very strong assumptions regarding independence, given the fact that it uses accumulated activity as the primary tool for inference. We now introduce our formal statistical approach for $^{210}$Pb dating, to solve this and several other issues inherent in the usual CRS technique just described.
4 A statistical approach to $^{210}$Pb dating
Let the concentration of $^{210}$Pb in a sample taken from section $i$ be a random variable $p_i$. It is important to clarify that this is not the cumulative concentration or activity from the surface to depth $x_i$, but rather the concentration found from depth $x_i - \delta_i$ to $x_i$, where $\delta_i$ is the sample's thickness. Each concentration ($p_i$) is measured separately, and therefore it is safe to assume that each sample is independent of the other measurements and is normally distributed with mean the true concentration $p^T_i$, and variance $\sigma^2_i$ as reported by the laboratory:

(13) $p_i \sim \mathrm{N}\!\left(p^T_i,\; \sigma^2_i\right)$.
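Because the samples are independent, the joint log-density of all measurements is the sum of normal log-densities. A small sketch of this building block (the measurement triples below are hypothetical):

```python
import math

def log_normal_density(y, mu, sd):
    """Log of the N(mu, sd^2) density evaluated at y."""
    return -0.5 * ((y - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))

# Hypothetical (measured, true mean, sd) triples; independence means the
# joint log-density is simply the sum over samples
measurements = [(118.0, 110.0, 6.0), (77.5, 80.0, 5.0)]
ll = sum(log_normal_density(y, mu, sd) for y, mu, sd in measurements)
```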
As mentioned above, the supported $^{210}$Pb is assumed to be in equilibrium throughout the core, which means that it remains constant through all depths. If $^{226}$Ra measurements are available, a supported value per sample can easily be included by letting $p^S$ be different at each depth. It is important to note that this will greatly increase the number of parameters, and should only be used when the hypothesis of a constant supported concentration has been shown to be unreasonable. If a constant supported $^{210}$Pb is valid, then we can use equation (2) to infer the supported $^{210}$Pb.
Now, assuming a constant rate of supply for the unsupported $^{210}$Pb, as described in section 2, the unsupported activity in sample $i$ can be obtained as follows:

(14) $p^U_i = \dfrac{\Phi}{\lambda}\left(e^{-\lambda t(x_i-\delta_i)} - e^{-\lambda t(x_i)}\right)$.

Since the supported $^{210}$Pb can be assumed to be constant, the supported activity of sample $i$ is

(15) $p^S_i = p^S \displaystyle\int_{x_i-\delta_i}^{x_i} \rho(x)\, dx$.

By defining

(16) $p^T_i = p^S \displaystyle\int_{x_i-\delta_i}^{x_i} \rho(x)\, dx + \dfrac{\Phi}{\lambda}\left(e^{-\lambda t(x_i-\delta_i)} - e^{-\lambda t(x_i)}\right)$,

we obtain the mean of the measurement model (13). It is important to note that the activity of each sample contains not only information about the ages, but also about the supported $^{210}$Pb ($p^S$) and the supply of unsupported $^{210}$Pb ($\Phi$) throughout the core.
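The expected total activity of one sample, as a function of the age-depth parameters, can be sketched directly (our own sketch; parameter values in the comments are illustrative, and the sample's dry mass stands in for the density integral in equation (15)):

```python
import math

LAM = math.log(2) / 22.3  # 210Pb decay constant (1/yr)

def expected_total(phi, p_s, mass, t_top, t_bottom):
    """Expected total activity of one sample: a constant supported term
    (p_s times the sample's dry mass, i.e. the density integral) plus the
    decayed unsupported term from the constant rate of supply assumption."""
    unsupported = phi / LAM * (math.exp(-LAM * t_top) - math.exp(-LAM * t_bottom))
    return p_s * mass + unsupported
```

Deeper (older) samples contribute a smaller unsupported term, while the supported term stays proportional to the sample's mass.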
To implement a Bayesian approach, prior distributions for each parameter have to be defined. Appleby (2001) suggested a global mean value for the supply of unsupported $^{210}$Pb, which can be used as prior information to obtain a prior distribution for $\Phi$. Because $\Phi$ is always positive, a gamma distribution centred on this value can be used. On the other hand, since supported $^{210}$Pb ($p^S$) varies greatly from site to site, defining an informative prior distribution for it is in general not possible. As a consequence, data for the supported $^{210}$Pb are necessary. These data can come from two different sources: $^{226}$Ra estimates, or measurements which have reached background. A prior distribution for the supported $^{210}$Pb is still required, but little is known about this parameter prior to obtaining the data; we have seen cores ranging from nearly zero supported $^{210}$Pb to very high levels. With this in mind, a gamma distribution with a suitably small shape parameter (and thus a large variance) allows the data to contribute most of the information to the posterior value of $p^S$. Lastly, to define a prior distribution for the ages, an age-depth function has to be defined.
4.1 Age-depth function
Since sediment cores can extend back thousands of years, $^{210}$Pb is not the only technique used to date them. $^{14}$C (radiocarbon) dating is a common way to obtain age estimates for organic material. The radiocarbon community has built sophisticated chronology models, which rely on equally sophisticated age-depth functions, with the objective of reducing and better representing the uncertainty of the resulting chronology. Because we want our approach to have the flexibility to incorporate other dating information such as radiocarbon, we decided to incorporate a well-established age-depth function.
Bacon (Blaauw and Christen, 2011), which is one of the most popular chronology models for $^{14}$C dating, assumes linear accumulation rates over segments of equal length. By using this same structure, age-depth models based on multiple isotopes could potentially be obtained. This is the age-depth model we are going to use, and we discuss the general construction of the Bacon age-depth function next (see Blaauw and Christen, 2011, for details). The age-depth function is considered piecewise linear: the core is divided into $K$ sections of equal length $\Delta c$, delimited by depths $c_0 < c_1 < \dots < c_K$ with $c_k = c_0 + k\,\Delta c$. Within each section linear accumulation is assumed, so for a depth $x$ in section $i$ ($c_{i-1} < x \le c_i$) the model can be expressed as

(17) $t(x) = t_0 + \Delta c \displaystyle\sum_{j=1}^{i-1} m_j + m_i\,(x - c_{i-1})$,

where $t_0$ is the age at the core top, the $m_j$ are the slopes (accumulation rates) of each linear segment, and $\Delta c$ is the length of each section.
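A minimal sketch of this piecewise-linear age-depth function (the section length and slope values in the example are ours, chosen only for illustration):

```python
def age_at(x, t0, slopes, c0, dc):
    """Piecewise-linear age-depth function (a sketch of eq. 17): age at depth
    x, given age t0 at the core top c0, per-section slopes (yr per unit
    depth), and section length dc."""
    i = min(int((x - c0) // dc), len(slopes) - 1)   # section containing x
    t = t0 + dc * sum(slopes[:i])                   # full sections above x
    return t + slopes[i] * (x - (c0 + i * dc))      # partial section

# Three sections of 5 cm with slower accumulation (larger yr/cm) downcore
ages = [age_at(x, 0.0, [2.0, 4.0, 8.0], 0.0, 5.0) for x in (0.0, 5.0, 12.5)]
```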
With this structure, a gamma autoregressive model is proposed for the accumulation rate of each section,

$m_j = w\, m_{j+1} + (1-w)\,\epsilon_j$,

where $\epsilon_j \sim \mathrm{Gamma}(\alpha_m, \beta_m)$ and $w$ is a memory parameter which, prior to the data, is distributed as $w \sim \mathrm{Beta}(\alpha_w, \beta_w)$. Using the above age-depth function and (16), the log-likelihood for the model takes the form

(18) $\ell \;=\; \mathrm{const} \;-\; \displaystyle\sum_{i=1}^{n} \dfrac{\left(p_i - p^T_i\right)^2}{2\sigma^2_i}$,

with $p^T_i$ as in equation (16). This likelihood contains all the parameters needed by our approach. Using the prior distributions previously mentioned, a posterior distribution is defined, from which we sample the model parameters using MCMC. To allow for faster convergence of the MCMC, a limit to the chronology is considered. This chronology limit is inspired by the $^{210}$Pb dating horizon, which is the age at which samples lack any measurable unsupported $^{210}$Pb.
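The likelihood above can be sketched as a plain function of the data and parameters (a sketch of ours, not Plum's code; the linear age function and parameter values in the test are illustrative assumptions):

```python
import math

LAM = math.log(2) / 22.3  # 210Pb decay constant (1/yr)

def log_likelihood(data, phi, p_s, age):
    """Gaussian log-likelihood of total 210Pb measurements.
    data: iterable of (top_depth, bottom_depth, dry_mass, measured, sd);
    age:  callable mapping depth to age (the age-depth function t(x))."""
    ll = 0.0
    for top, bottom, mass, y, sd in data:
        unsupported = phi / LAM * (math.exp(-LAM * age(top)) -
                                   math.exp(-LAM * age(bottom)))
        mu = p_s * mass + unsupported      # expected total activity, eq. (16)
        ll += -0.5 * ((y - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))
    return ll
```

At the parameter values that generated the data, this log-likelihood is maximised, which is what the MCMC exploits when combined with the priors.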
4.2 Chronology limit
The dating horizon was described by Appleby (1998) to be 100–150 years, based on the available knowledge and measurement techniques at the time. The dating horizon of a given core is affected by different factors. The first of these is the equipment used to measure the samples: if one instrument has higher precision than another, it will be able to distinguish unsupported from supported $^{210}$Pb down to deeper samples, and thus provide ages further back in time. The other factor that affects the dating horizon is the quantity of initial unsupported $^{210}$Pb, which is directly affected by the rate of supply ($\Phi$): when there is more initial unsupported $^{210}$Pb, it will take longer for the unsupported $^{210}$Pb in a sample to become indistinguishable from the supported $^{210}$Pb.
We therefore decided to set a dynamic chronology limit for our method. This limit ($t_{\lim}$) is determined by two factors: the rate of supply of $^{210}$Pb to the site ($\Phi$) and the error related to the equipment used to measure the samples. For example, let us assume that the equipment used to measure the concentration of $^{210}$Pb has a minimum detectable activity $\epsilon$. Given a typical density for the material being sampled, for example a bog (Chambers et al., 2011), once the unsupported activity in a sample falls below this threshold it becomes indistinguishable from the supported activity. This information can be used to calculate the dynamic age limit. By using equation (14) we have
(19) $\epsilon = \dfrac{\Phi}{\lambda}\, e^{-\lambda t_{\lim}}$,

where $\epsilon$ is the minimum distinguishable unsupported activity in a sample (related to the equipment's error), $\Phi$ is the supply of $^{210}$Pb to the site and $\lambda$ is the decay constant; this is the remaining unsupported activity obtained from equation (14) when the lower depth is taken to be infinitely old. Considering that $\epsilon$ is a constant, solving for the age gives

(20) $t_{\lim} = \dfrac{1}{\lambda} \log\!\left(\dfrac{\Phi}{\lambda\,\epsilon}\right)$.

It is important to note that this limit depends on the error of the equipment and on the origin of the samples, which are factors known prior to obtaining the data. Moreover, $\Phi$ is a variable of the model. This allows the model to limit the chronology given $\Phi$.
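Equation (20) is a one-liner; the sketch below (our own, with an arbitrary supply and detection threshold) also checks the round trip against equation (19):

```python
import math

LAM = math.log(2) / 22.3  # 210Pb decay constant (1/yr)

def age_limit(phi, eps):
    """Dynamic dating horizon: the age at which the remaining unsupported
    activity (phi / lam) * exp(-lam * t) falls to the detection threshold eps."""
    return math.log(phi / (LAM * eps)) / LAM

# Illustrative values: supply phi = 150, detection threshold eps = 0.5
t_lim = age_limit(150.0, 0.5)
```

A site receiving a larger supply $\Phi$ has a later horizon, which is exactly why the limit must be dynamic rather than fixed.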
4.3 Implementation and MCMC
Blaauw and Christen (2011) proposed the use of a self-adjusting MCMC algorithm known as the t-walk (Christen and Fox, 2010), which facilitates the use of these techniques by non-statisticians. The t-walk algorithm requires two initial points for the parameter vector $\theta$ and the negative of the log-posterior, which is called the energy function:

(21) $E(\theta) = -\log\!\left(f(\mathbf{p} \mid \theta)\, f(\theta)\right)$,

where $f(\mathbf{p} \mid \theta)$ is the likelihood and $f(\theta)$ the prior.
A program (in Python 2.7) called Plum implements this approach and obtains a sample from the posterior distribution. Plum has been tested on peat and lake sediment cores, as well as on simulated data, providing reasonable results with no tuning of parameters needed; examples of these results can be seen in sections 5 and 6. The consistency of these results, with minimal user input, shows that the t-walk (Christen and Fox, 2010) was a suitable choice for this implementation.
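To make the energy-function idea concrete, here is a minimal random-walk Metropolis sampler driven by an energy function. This is a stand-in sketch of ours for exposition only; Plum actually uses the self-adjusting t-walk, whose internals differ:

```python
import math, random

def metropolis(energy, theta0, steps=2000, scale=1.0, seed=1):
    """Minimal random-walk Metropolis driven by an energy function
    (the negative log-posterior); a toy stand-in for the t-walk."""
    rng = random.Random(seed)
    theta, e = theta0, energy(theta0)
    chain = []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, scale)          # symmetric proposal
        ep = energy(prop)
        # accept with probability min(1, exp(e - ep))
        if ep < e or rng.random() < math.exp(e - ep):
            theta, e = prop, ep
        chain.append(theta)
    return chain

# Toy target: a standard normal, whose energy is 0.5 * theta^2
chain = metropolis(lambda th: 0.5 * th * th, 0.0)
```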
5 Model comparison
To apply our approach to the HP1C data presented in section 3.1, Plum was programmed to use the last 4 samples from Table 1 as estimates of the supported activity, using the rest of the samples to establish the chronology. Figure 2 shows the results of the CRS model in red and our approach in black and grey. From this comparison we can observe that both models agree with each other in the upper part of the core, below which the CRS model continues at a similar slope whereas our approach provides younger estimates. This uninterrupted growth of the CRS model can be explained by its age function, a logarithmic approximation which invariably tends to infinity as the unsupported $^{210}$Pb reaches 0. Even with these discrepancies both models have overlapping confidence intervals, with our approach providing a more precise chronology in the topmost part and a more conservative estimate for the deepest part of the core.
This example shows the potential of our approach in a 'well-behaved' real-world case study, but it does not let us observe the precision of this approach when confronted with more challenging data sets, such as those which did not reach background and/or have missing data. For this purpose, several simulations were created where we know the 'true' chronology and can observe how our approach behaves in more challenging circumstances.
6 Simulated Example
To obtain simulated data, a constant rate of supply $\Phi$ of $^{210}$Pb was defined, so that by the constant rate of supply assumption from equation (3) we have $P^0(x)\, r(x) = \Phi$. We then define a density function $\rho(x)$, from which $r(x)$ is obtained by using equation (5), and an age function $t(x)$:
(22)  
(23) 
Using these functions, simulated samples at any given depth can be obtained by integrating each function between the top and bottom depths of the sample. Lastly, to simulate supported $^{210}$Pb, a constant value was added to the simulated activity of each sample between its top and bottom depths. To replicate the measurement errors in the $^{210}$Pb concentrations, white noise was added, so that each reported value is the simulated concentration of sample $i$ plus a normally distributed error. This exercise provided us with the data set in Table 2. We use this simulated data set to test the precision of our approach in various circumstances. For this purpose, the last three sample points were designated as estimates of the supported $^{210}$Pb.
The best scenario for age-depth models is when every core section is measured, from the surface to where background is reached. In this scenario any approach should give its best results, since the complete information about the decay of unsupported $^{210}$Pb is available. This scenario can be simulated using the complete data set from Table 2. Figure 3 shows the comparison between the chronology obtained by our model and that of the CRS model (Appleby, 2001), alongside the real age function; both models include the true chronology in their 95% intervals. By applying our approach to this scenario, we obtained a very accurate chronology by taking the mean of the MCMC simulations. This shows, unsurprisingly, that our model behaves quite well in the best-case scenario. On the other hand, the CRS model provides a shorter chronology, because some samples had to be discarded; this is a direct result of the logarithmic approximation mentioned in section 5. In this particular case, the two bottommost samples had to be discarded since the last sample was smaller than the mean of the three samples used to calculate the supported activity. The CRS model also estimates younger ages for this example, which can be a result of the underestimated supported $^{210}$Pb, as can be observed in Figure 3. Another feature of the CRS model worth mentioning is the rapid growth of the chronology at the last sample; as previously mentioned, this rapid increase can be attributed to the logarithmic approximation the CRS uses.
The following scenarios deal with the behaviour of our model in circumstances where the dating information is incomplete. The CRS model can provide age estimates in these scenarios only by interpolating and extrapolating over the sections with missing data. Applying the CRS model to these simulations would thus require several additional heuristic decisions with large potential impacts on the chronology (e.g., exponential or linear extrapolation beyond and/or between the dated levels; see Sanchez-Cabeza and Ruiz-Fernández, 2012). Such comparisons lie outside the scope of the present work and will be explored in a future study; consequently, for the next examples we only study the performance of the Plum chronology.
Sometimes researchers do not have the funds to obtain a full, continuously measured data set for the chronology they want to build. When this is the case, only certain strategically placed samples are measured. To simulate this scenario, only the data at odd depths were used to obtain the chronology. Figure 4 shows the results from this experiment. The accuracy of the model did not change, as it still gives an accurate estimate of the true age model, and the precision was not greatly affected even though only half of the available data was used to calculate this chronology.
A common problem in $^{210}$Pb dating is not reaching background. To observe the behaviour of our model in these circumstances, the bottommost seven data points were removed, leaving us with a data set which has not reached background. Figure 4 presents the resulting chronology compared to the true age function. The chronology is accurate down to a depth of 16 cm, below which it provides somewhat older estimates; nevertheless, the model's intervals enclose the true chronology at all times, even for the older ages.
The last scenario against which our approach was tested is missing topmost sediment. For this example, the data points from two to ten cm depth were removed, leaving us with a data set missing its topmost data. Figure 4 shows the results of this experiment. Even with a third of the data missing consecutively, the model is able to accurately reconstruct the true age function.
Our approach behaved well in every tested scenario, with its accuracy largely unaffected by the missing or incomplete data we introduced.
7 Discussion
The approach developed here presents a more robust methodology for dealing with $^{210}$Pb data. The advantage comes from a more realistic measure of uncertainty, since the ages are parameters which are inferred in the process. Moreover, dealing with missing data, which is a common problem in $^{210}$Pb dating, becomes easier because our model does not need the whole core to be measured to obtain accurate results. Also, since the CRS model relies on a ratio, that approach requires removal of the bottommost measurement. Since our methodology does not rely on a ratio, all the samples provide information to the chronology, making longer chronologies possible.
Because the supported $^{210}$Pb is integrated into the model, a posterior distribution of this variable can be obtained, as well as of the ages at any given depth (not just those with measurements) and of the supply of $^{210}$Pb to the site. Figure 2 shows the posterior distributions of the supported $^{210}$Pb and the supply of unsupported $^{210}$Pb. These posterior distributions provide more realistic estimates of the uncertainty of these variables, which may be used for other studies where the main focus is not the chronology but other aspects of the site.
Another advantage of this methodology is that, since the model operates within a Bayesian framework, extra information can be incorporated without having to 'double-model' by using previously modelled ages within an age-depth model. This information could come in the form of other radiometric ages, such as radiocarbon determinations. Since measurements of radiocarbon and $^{210}$Pb, given the age, are independent, the overall likelihood consists of two parts, the likelihood from $^{210}$Pb and that from $^{14}$C:

(24) $f(\mathbf{p}, \mathbf{y} \mid \theta) = f_{^{210}\mathrm{Pb}}(\mathbf{p} \mid \theta)\; f_{^{14}\mathrm{C}}(\mathbf{y} \mid \theta)$.
Considering that the only link between both data sets is the age-depth function, by using the same age-depth function, such as that from equation (17), a chronology combining both sources of data is possible. This becomes very important because the $^{14}$C calibration curve (Reimer et al., 2013), which is used to calibrate radiocarbon ages, is non-linear for the most recent few centuries, causing problems when interpreting radiocarbon ages. This period is partly covered by $^{210}$Pb. By combining these two methodologies, more robust chronologies can be obtained for this important period in human and environmental history.
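Conditional independence given the age means the two log-likelihoods simply add. The sketch below is entirely illustrative (both "dating terms" are made-up Gaussian stand-ins, not real ²¹⁰Pb or calibrated ¹⁴C likelihoods); it shows how a shared age parameter is informed by both:

```python
import math

def gauss_loglik(y, mu, sd):
    """Log-density of a N(mu, sd^2) observation y."""
    return -0.5 * ((y - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))

def combined_loglik(t, pb_term, c14_term):
    """Joint log-likelihood of a 210Pb and a 14C determination sharing the
    same age t: independence given the age means log-likelihoods add."""
    return pb_term(t) + c14_term(t)

# Illustrative terms only: each dating method alone prefers a different age,
# and the combined likelihood settles in between
pb = lambda t: gauss_loglik(95.0, 100.0 - t, 5.0)   # hypothetical 210Pb term
c14 = lambda t: gauss_loglik(8.0, 0.1 * t, 1.0)     # hypothetical 14C term
best = max(range(0, 121), key=lambda t: combined_loglik(t, pb, c14))
```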
References
 Appleby (1998) Appleby, P. G. (1998), "Dating recent sediments by 210Pb: Problems and solutions," Proc. 2nd NKS/EKO-1 Seminar, Helsinki, 2–4 April 1997, STUK, Helsinki, pp. 7–24.
 Appleby (2001) Appleby, P. G. (2001), "Chronostratigraphic Techniques in Recent Sediments," Tracking Environmental Change Using Lake Sediments: Basin Analysis, Coring, and Chronological Techniques, pp. 171–203.
 Appleby and Oldfield (1978) Appleby, P., and Oldfield, F. (1978), "The calculation of lead-210 dates assuming a constant rate of supply of unsupported 210Pb to the sediment," Catena, 5(1), 1–8.
 Blaauw and Christen (2011) Blaauw, M., and Christen, J. A. (2011), "Flexible paleoclimate age-depth models using an autoregressive gamma process," Bayesian Analysis, 6(3), 457–474.

 Chambers et al. (2011) Chambers, F. M., Beilman, D. W., and Yu, Z. (2011), "Methods for determining peat humification and for quantifying peat bulk density, organic matter and carbon content for palaeostudies of climate and peatland carbon dynamics," Mires and Peat, 7, 1–10. http://miresandpeat.net/pages/volumes/map07/map0707.php
 Christen and Fox (2010) Christen, J. A., and Fox, C. (2010), "A general purpose sampling algorithm for continuous distributions (the t-walk)," Bayesian Analysis, 5(2), 263–281. http://dx.doi.org/10.1214/10BA60
 Goldberg (1963) Goldberg, E. D. (1963), "Geochronology with 210Pb," Radioactive Dating, pp. 121–131.
 Gunten et al. (1995) von Gunten, H. R. (1995), "Radioactivity: A Tool to Explore the Past. I. Introduction," Radiochimica Acta, 70/71, 305–316.
 Krishnaswamy et al. (1971) Krishnaswamy, S., Lal, D., Martin, J., and Meybeck, M. (1971), “Geochronology of lake sediments,” Earth and Planetary Science Letters, 11(15), 407–414.
 Reimer et al. (2013) Reimer, P. J., Bard, E., Bayliss, A., Beck, J. W., Blackwell, P. G., Bronk Ramsey, C., Grootes, P. M., Guilderson, T. P., Haflidason, H., Hajdas, I., Hatté, C., Heaton, T. J., Hoffmann, D. L., Hogg, A. G., Hughen, K. A., Kaiser, K. F., Kromer, B., Manning, S. W., Niu, M., Reimer, R. W., Richards, D. A., Scott, E. M., Southon, J. R., Staff, R. A., Turney, C. S. M., and van der Plicht, J. (2013), "IntCal13 and Marine13 radiocarbon age calibration curves 0–50,000 years cal BP," Radiocarbon, 55, 1869–1887.
 Robbins (1978) Robbins, J. (1978), “Geochemical and geophysical applications of radioactive lead,” The biogeochemistry of lead in the environment, pp. 285–393.
 Sanchez-Cabeza and Ruiz-Fernández (2012) Sanchez-Cabeza, J. A., and Ruiz-Fernández, A. C. (2012), "210Pb sediment radiochronology: An integrated formulation and classification of dating models," Geochimica et Cosmochimica Acta, 82, 183–200.
 Sanchez-Cabeza et al. (2014) Sanchez-Cabeza, J. A., Ruiz-Fernández, A. C., Ontiveros-Cuadras, J. F., Pérez Bernal, L. H., and Olid, C. (2014), "Monte Carlo uncertainty calculation of 210Pb chronologies and accumulation rates of sediments and peat bogs," Quaternary Geochronology, 23, 80–93.
8 Data
Table 1: depth, total $^{210}$Pb activity and density for each sample of core HP1C.
Table 2: depth, $^{210}$Pb activity and density for each simulated sample.