The partitioning of time series data into segments of piecewise stationary statistics is important in many fields, ranging from the analysis of accelerometer data corresponding to human gait motion to the analysis of biomedical data . Bayesian approaches that treat the change point transition times and the number of segments as statistical parameters are particularly interesting. Such techniques enable the modeller to incorporate uncertainty in the statistical parameters being estimated, thereby providing a means of controlling the model complexity with respect to the model fit.
The work in  proposed a fully hierarchical Bayesian model in which the uncertainty in the relevant parameters of interest was captured using appropriate prior probabilities. A reversible jump Markov chain Monte Carlo (MCMC) technique was then used to obtain estimates of the relevant parameters. While closed-form analytical expressions for the posterior derived in  are not possible, the work in  proposed an efficient recursive solution for computing such an estimate. More recently, a multivariate extension that captures changes in the dependency structure of the data was proposed in , while the work in  proposed a nonparametric Bayesian method for detecting changes in the variance of the data.
Many change point detection algorithms assume that the parameters of different segments are distinct. In contrast, infinite hidden Markov models (IHMMs) and Dirichlet process mixture models (DPMMs) assume that the data at each time point are generated by a parameter belonging to a potentially infinite number of states or classes, the number of which is determined from the data set. These methods have been introduced into change point detection algorithms . However, such work often assigns a parameter (state) label to each individual time point rather than to the parameters of a given segment.
In this paper we propose a mean-shift Bayesian change point detection algorithm (which can be generalised to the parameters of a statistical model) that exploits repetition in segment parameters for more robust segmentation. This is achieved by extending the change point detection algorithm introduced in , including parameter class labels that employ a Dirichlet process prior to identify the number of distinct segment parameters. Simulations on synthetic and real world data verify the efficacy of the proposed method.
II-A MCMC Change Point Detection
Given a set of data points $\mathbf{x} = (x_1, \ldots, x_N)$ and transition times $\boldsymbol{\tau} = (\tau_1, \ldots, \tau_k)$, where $\tau_0 = 0$ and $\tau_{k+1} = N$, there exist $k+1$ segments such that for each segment $i$ the following functional relationship between the data points (within the time indices $\tau_{i-1} < t \le \tau_i$) and the statistical parameter $\theta_i$ is satisfied, that is
$$x_t = f(t; \theta_i) + \epsilon_t, \qquad (1)$$
for $\tau_{i-1} < t \le \tau_i$, where $\epsilon_t$ is a set of i.i.d. zero mean Gaussian noise samples (with a segment-specific variance of $\sigma_i^2$). The hierarchical Bayesian model introduced in  derived a posterior distribution such that the target parameters of interest were both the number of change points $k$ and their locations $\boldsymbol{\tau}$, namely
$$p(k, \boldsymbol{\tau} \mid \mathbf{x}, \boldsymbol{\xi}), \qquad (2)$$
where $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_{k+1})$ corresponds to the vector of segment parameters (that is integrated out of the posterior) and $\boldsymbol{\xi}$ represents the set of hyperparameters for the relevant prior probabilities. The likelihood function used in the posterior distribution (2) assumes that the parameters $\theta_i$ are distinct for each segment $i$, that is
$$p(\mathbf{x} \mid k, \boldsymbol{\tau}, \boldsymbol{\theta}) = \prod_{i=1}^{k+1} \prod_{t = \tau_{i-1}+1}^{\tau_i} p(x_t \mid \theta_i).$$
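As a concrete illustration, data from this piecewise stationary model can be simulated for the mean-shift case; the function name and segment values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_piecewise(means, seg_lengths, noise_std=1.0, rng=rng):
    """Simulate piecewise-constant data: within segment i, each sample is
    the segment mean plus i.i.d. zero-mean Gaussian noise."""
    x = np.concatenate([m + noise_std * rng.standard_normal(n)
                        for m, n in zip(means, seg_lengths)])
    taus = np.cumsum(seg_lengths)[:-1]  # change point locations
    return x, taus

x, taus = generate_piecewise([0.0, 3.0, -1.0], [50, 40, 60])
```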
II-B Dirichlet Process Mixture Model
A finite mixture model (FMM) assumes that the data are drawn from a weighted combination of distributions from the same parametric family with differing parameters $\theta_j$. That is, $p(x_n) = \sum_{j=1}^{K} \pi_j F(x_n; \theta_j)$, where $K$ is the number of classes, $\pi_j$ is the mixing coefficient and $\theta_j$ corresponds to the class parameter(s) of the probability distribution. For a given data set, determining the number of classes in a systematic way can be a challenging task. In order to overcome this problem, we must first consider the generative model of the FMM, where the latent class label $c_n$ is introduced, that is
$$c_n \mid \boldsymbol{\pi} \sim \mathrm{Discrete}(\pi_1, \ldots, \pi_K), \qquad x_n \mid c_n, \boldsymbol{\theta} \sim F(\theta_{c_n}), \qquad (4)$$
where $\theta_j \sim G_0$ corresponds to the prior distribution of the parameters, $n = 1, \ldots, N$ and $j = 1, \ldots, K$. The mixing coefficients $\boldsymbol{\pi}$ in the model (4) govern the likelihood of selecting a given class. Employing a symmetric Dirichlet distribution prior on the mixing coefficients and taking the limit $K \to \infty$ results in the Dirichlet process (DP) mixture model , which is often written as
$$G \sim \mathrm{DP}(\alpha, G_0), \qquad \theta_n \mid G \sim G, \qquad x_n \mid \theta_n \sim F(\theta_n),$$
where $G$ is drawn from the Dirichlet process with base measure $G_0$ and concentration parameter $\alpha$. Inference of the DP mixture model is generally carried out using Gibbs sampling, where the state of the Markov chain consists of all the parameters and class labels. A particularly interesting outcome of the DP mixture model is in the inference of the class parameters (which does not require the direct specification of the number of classes). That is, the conditional posterior probability of assigning a data point to an existing class is given by
$$p(c_n = j \mid \mathbf{c}_{-n}, x_n, \boldsymbol{\theta}) \propto \frac{n_{-n,j}}{N - 1 + \alpha} F(x_n; \theta_j),$$
where $n_{-n,j}$ is the number of data points (excluding $x_n$) assigned to class $j$ and $\mathbf{c}_{-n}$ is the vector of class labels excluding $c_n$, while the conditional class posterior probability for assigning the data point to a new class is given by
$$p(c_n \ne c_m \ \text{for all}\ m \ne n \mid \mathbf{c}_{-n}, x_n) \propto \frac{\alpha}{N - 1 + \alpha} \int F(x_n; \theta) \, \mathrm{d}G_0(\theta).$$
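A minimal sketch of these two assignment probabilities (the prior terms of the Chinese restaurant process only, before multiplying by the likelihood; the function name is illustrative):

```python
import numpy as np

def crp_assignment_probs(other_labels, alpha):
    """Chinese restaurant process prior for the class label of one point,
    given the labels of the remaining N - 1 points: an existing class j
    has probability n_j / (N - 1 + alpha), a new class alpha / (N - 1 + alpha)."""
    classes, counts = np.unique(other_labels, return_counts=True)
    denom = len(other_labels) + alpha  # N - 1 points remain after exclusion
    p_existing = dict(zip(classes.tolist(), (counts / denom).tolist()))
    p_new = alpha / denom
    return p_existing, p_new
```

Note that the existing-class and new-class probabilities sum to one, as required.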
III Proposed Work
We propose a novel mean-shift change detection algorithm by including parameter class labels in the technique proposed in . That is, given a set of data points $\mathbf{x}$ and transition times $\boldsymbol{\tau}$, the functional relationship defined in (1) for mean-shift change detection (assuming distinct parameters in each segment $i$) is given by $\mathbf{x}_i = \mu_i \mathbf{1} + \boldsymbol{\epsilon}_i$, with $\mu_i$ corresponding to the mean of the segment and $\mathbf{1}$ a vector with elements equal to 1. Each segment has a distinct parameter $\mu_i$; however, data often exhibit parameters that repeat across different segments, and therefore the model in  does not efficiently use the similarity in the segment parameters.
We propose to include parameter class labels $c_i$, such that the mean parameters $\mu_i$, $i = 1, \ldots, k+1$, are generated by the following Gaussian mixture model
$$\mu_i \mid c_i \sim \mathcal{N}(m_{c_i}, \sigma_{c_i}^2),$$
where $m_j$ and $\sigma_j^2$ correspond to the class parameters. For each segment the functional relationship in (1) is given by $\mathbf{x}_i = \mu_i \mathbf{1} + \boldsymbol{\epsilon}_i$, where different segments can now be assigned to the same class of parameters (see Fig. 1). As a result, data in segments with the same parameter labels are combined for more robust segmentation.
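For illustration, segment means with repeating classes can be generated as follows (a sketch; the class parameters and weights are illustrative values, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_segment_means(n_segments, class_means, class_stds, weights, rng=rng):
    """Assign each segment a class label, then draw its mean from that
    class's Gaussian; different segments may share the same class."""
    labels = rng.choice(len(class_means), size=n_segments, p=weights)
    means = rng.normal(np.asarray(class_means)[labels],
                       np.asarray(class_stds)[labels])
    return labels, means

labels, means = draw_segment_means(8, [0.0, 5.0], [0.1, 0.1], [0.5, 0.5])
```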
III-A Bayesian Model
The following model formally states the proposed mean-shift change point algorithm, which includes parameter labelling: the prior on the number of change points corresponds to a Binomial distribution, $p(m_j, \sigma_j^2)$ is the joint prior distribution of the class mean and variance, $p(\sigma_x^2)$ is the prior distribution of the variance of the data points with the same class label, and the likelihood of the data corresponds to the joint Normal distribution.
The state of the Markov chain consists of the parameters $\{k, \boldsymbol{\tau}, \mathbf{c}, \mathbf{m}, \boldsymbol{\sigma}^2\}$, where $\mathbf{m}$ is the set of class means and $\boldsymbol{\sigma}^2$ the set of class variances. Inference of the parameters is carried out using a Metropolis-Hastings-within-Gibbs sampling scheme. The Gibbs moves are performed on each parameter in the set $\{\mathbf{c}, \mathbf{m}, \boldsymbol{\sigma}^2\}$, while a variation of the Metropolis-Hastings algorithm is used to obtain samples for the parameters $\{k, \boldsymbol{\tau}\}$.
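The overall sweep can be sketched schematically as follows (a generic Metropolis-Hastings-within-Gibbs skeleton, not the paper's exact updates):

```python
def mh_within_gibbs_sweep(state, gibbs_samplers, mh_move, rng):
    """One MCMC sweep: draw each conjugate parameter exactly from its full
    conditional (Gibbs), then apply a Metropolis-Hastings accept/reject
    move to the change point configuration."""
    for name, sampler in gibbs_samplers.items():
        state[name] = sampler(state, rng)  # exact conditional draw
    return mh_move(state, rng)             # accept/reject a proposed move
```

In practice the `gibbs_samplers` would hold the conditional draws for the class labels, class means, and variances, while `mh_move` proposes the birth/death/update moves described below.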
The Gibbs sampling procedure requires the conditional posterior distributions for all class means $m_j$ and variances $\sigma_j^2$, along with the conditional posterior variance of the data points within the same class and each class label $c_i$. The exact derivation of these conditional posterior distributions was carried out in both  and ; we have assumed a class mean prior that is dependent on the class variance $\sigma_j^2$. Furthermore, the class variance has an inverse Gamma prior.
The conditional posterior distribution of the parameters $\{k, \boldsymbol{\tau}, \mathbf{c}\}$ is obtained as follows. Owing to the selection of the appropriate conjugate priors, we can integrate out the nuisance parameters. This is carried out by considering the joint posterior distribution, where the number of change points has uniform probability over its admissible range. The likelihood function combines the data within segments that share the same parameter label, for potentially more accurate parameter estimation (in the mean squared error sense). Integration of (10) with respect to the nuisance parameters results in the conditional posterior distribution (11) of the parameters $\{k, \boldsymbol{\tau}, \mathbf{c}\}$, where $\mathbf{x}_{(j)}$ is the concatenated vector of all data points with the same segment label $j$ and $n_j$ is the number of data points with label $j$. Finally, we note that there are some challenges in drawing samples from (11), owing to the dependence on the class labels $\mathbf{c}$, which we address in the next section.
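The pooling of data across segments that share a label can be sketched as follows (an illustrative helper, not from the paper):

```python
import numpy as np

def pool_by_label(x, taus, labels):
    """Concatenate the data from all segments that share a class label, so
    that a repeated parameter is estimated from the pooled observations."""
    bounds = [0] + list(taus) + [len(x)]
    pooled = {}
    for i, lab in enumerate(labels):
        pooled.setdefault(lab, []).append(x[bounds[i]:bounds[i + 1]])
    return {lab: np.concatenate(segs) for lab, segs in pooled.items()}
```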
III-B MCMC Sampling
Samples for the parameters $\{\mathbf{c}, \mathbf{m}, \boldsymbol{\sigma}^2\}$ are obtained by drawing from the corresponding conditional posterior densities, where $\mathbf{x}_{(j)}$ is found by concatenating all the data points that share the same segment label $j$. More details on the exact distribution forms and sampling can be found in .
In order to evaluate the conditional posterior distribution of $\{k, \boldsymbol{\tau}\}$ we use a modification of the Metropolis-Hastings algorithm outlined in  that incorporates the segment labels $\mathbf{c}$. Given the current state of the Markov chain, we select one of the following steps with the corresponding probabilities:
- birth of a change point with probability $b_k$,
- death of a change point with probability $d_k$,
- update of the change point positions with probability $u_k = 1 - b_k - d_k$,

where $d_k = 0$ for $k = 0$, and $b_k = 0$ for $k = k_{\max}$.
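The move selection can be sketched as follows (the constant `c` is an assumed value; the paper's exact expressions for the move probabilities are not reproduced here):

```python
def move_probabilities(k, k_max, c=0.3):
    """Probabilities of the birth/death/update moves given k change points:
    no death move when k = 0, no birth move when k = k_max. The remaining
    mass goes to the update move. The constant c is an assumed value."""
    b = c if k < k_max else 0.0
    d = c if k > 0 else 0.0
    return {'birth': b, 'death': d, 'update': 1.0 - b - d}
```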
A birth move consists of proposing a new change point $\tau^{*}$ with uniform probability from the existing time indices, excluding the current change point locations. The proposed set of change points including $\tau^{*}$ is given by $\boldsymbol{\tau}^{*}$, where the segment containing $\tau^{*}$, with its associated class variable, is split into two new segments with two new class variables. As we have not yet inferred the new class labels from the conditional class posterior distributions, we assume that the two classes are distinct and thus independent from all other segments, in order to circumvent the lack of information we have for assignment to an existing class. The proposed transition time is accepted with probability $\min(1, r_{\text{birth}})$, where $r_{\text{birth}}$ is the ratio of the conditional posterior probabilities of the proposed and current states, multiplied by the appropriate proposal ratio.
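The uniform birth proposal itself can be sketched as follows (an illustrative helper, excluding occupied time indices as described above):

```python
import numpy as np

def propose_birth(taus, n, rng):
    """Propose a new change point uniformly from the interior time indices
    1..n-1 that are not already occupied by an existing change point."""
    candidates = np.setdiff1d(np.arange(1, n), np.asarray(taus, dtype=int))
    tau_new = int(rng.choice(candidates))
    return sorted(list(taus) + [tau_new])
```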
The death move proposes to remove a transition time $\tau_j$, chosen with uniform probability from the set of existing change points. That is, the two segments adjacent to $\tau_j$ are combined into one segment. Furthermore, their class labels are combined into a single new class label (utilising the argument used for the birth of a change point). The removal of $\tau_j$ is accepted with probability $\min(1, r_{\text{death}})$.
The update of the change points is carried out by first removing a time index in $\boldsymbol{\tau}$ and then proposing a new change point at a new location. That is, a death move is first applied, followed by a birth move, for each change point in turn.
IV-A Synthetic Data
We evaluated the performance of the proposed algorithm under two different scenarios. In the first scenario every segment was produced with a randomly generated mean parameter (with fixed variance), while in the second scenario each segment had a mean parameter (with fixed variance) drawn from a fixed number of repeating classes. Furthermore, each segment length was drawn with uniform probability, while the number of segments was also selected randomly (each realisation on average had approximately 7 change points). The proposed method was compared with the following algorithms: MCMC , Group Fused LASSO  and PELTS . In general we fix all the parameters of the proposed method except two; we note that the parameters for each method were selected such that the proportions of false positives were as close as possible across methods. We evaluated the performance of the respective algorithms using the following measures: the proportion of true positives, along with the absolute error in the change point location estimates.
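These two measures can be computed with a simple matching procedure (an illustrative sketch; the matching tolerance is an assumed value, not from the paper):

```python
def match_change_points(true_cps, est_cps, tol=5):
    """Greedily match each estimated change point to the nearest unmatched
    true change point within tol samples; return the number of true
    positives and their mean absolute location error."""
    remaining = list(true_cps)
    errors = []
    for e in sorted(est_cps):
        if not remaining:
            break
        best = min(remaining, key=lambda t: abs(t - e))
        if abs(best - e) <= tol:
            errors.append(abs(best - e))
            remaining.remove(best)
    tp = len(errors)
    return tp, (sum(errors) / tp if tp else None)
```

Estimates left unmatched by this procedure count as false positives.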
From Table 1 it can be observed that the proposed method outperformed both the MCMC in  and the Group Fused LASSO, with respect to the number of true positives and the change point location estimation, for time series data with random mean parameter assignment. Furthermore, Table 2 shows that the proposed method significantly outperformed the MCMC and Group Fused LASSO algorithms with respect to the proportion of true positives when segmenting data with repeating mean parameters. However, the PELTS algorithm outperformed the proposed method with respect to the proportion of true positives and the change point location estimation (as shown in both Tables 1 and 2). The PELTS algorithm represents the state of the art for detecting changes with respect to a fixed statistical parameter (using dynamic programming), whereas the proposed Bayesian change point detection algorithm can cater for linear statistical models (e.g. autoregressive models) with arbitrary model orders for each segment, and allows the prediction of change points using the posterior predictive distribution, which is not possible with PELTS.
|Methods|True Positives|False Positives|Error|
|G. F. LASSO| | | |
V Conclusions and Future Work
This work proposed a mean-shift change point detection algorithm that captures parameter repetition for more robust time series segmentation. This was achieved by labelling the parameters of each segment and employing a Dirichlet process prior. We have shown the advantage of the proposed method on synthetic and real world data. Future work will extend the proposed method to a wider class of time series models, as well as including hyperparameter updates.
-  E. Sejdic, C. M. Steele, and T. Chau, “Segmentation of dual-axis swallowing accelerometry signals in healthy subjects with analysis of anthropometric effects on duration of swallowing activities,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, pp. 1090–1097, 2009.
-  E. Punskaya, C. Andrieu, A. Doucet, and W. Fitzgerald, “Bayesian curve fitting using MCMC with applications to signal segmentation,” IEEE Transactions on Signal Processing, vol. 50, no. 3, pp. 747–758, 2002.
-  P. Fearnhead, “Exact Bayesian curve fitting and signal segmentation,” IEEE Transactions on Signal Processing, vol. 53, no. 6, pp. 2160–2166, 2005.
-  X. Xuan and K. Murphy, “Modeling changing dependency structure in multivariate time series,” International Conference on Machine Learning, pp. 1055–1062, 2007.
-  A. A. Hensley and P. M. Djuric, “Nonparametric learning for hidden Markov Models with preferential attachment dynamics,” International Conference on Acoustics, Speech and Signal Processing, pp. 3854–3858, 2017.
-  C. E. Rasmussen, “The infinite Gaussian mixture model,” Advances in Neural Information Processing Systems 12, pp. 554–560, 2000.
-  E. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky, “Bayesian nonparametric inference of switching dynamic linear models,” IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1569–1585, 2011.
-  S. I. M. Ko, T. T. L. Chong, and P. Ghosh, “Dirichlet process hidden Markov multiple change-point model,” Bayesian Analysis, vol. 10, no. 2, pp. 275–296, 2015.
-  R. M. Neal, “Markov chain sampling methods for Dirichlet process mixture models,” Journal of Computational and Graphical Statistics, vol. 9, no. 2, pp. 249–265, 2000.
-  K. Bleakley and J.-P. Vert, “The group fused LASSO for multiple change-point detection,” arXiv:1106.4199, 2011.
-  R. Killick, P. Fearnhead, and I. A. Eckley, “Optimal detection of changepoints with a linear computational cost,” Journal of the American Statistical Association, vol. 107, no. 500, pp. 1590–1598, 2012.