1 Introduction
A major challenge in data analysis is to find simple representations of the data that best reveal the underlying structure of the investigated phenomenon [15]. Clustering is a powerful tool to detect such structure in empirical data, thus making it accessible to practitioners [14]. The problem of clustering has a very long history in the data mining and machine learning communities, and numerous clustering algorithms and applications have been studied in many different scientific disciplines over the past 50 years [13]. Applications of clustering cover a large variety of problem domains, for example, clustering text, social networks, images, or biomedical data [4, 10, 21, 28]. Traditional clustering methods such as k-means or Gaussian mixture models [12] rely on a geometric representation of the data. Nowadays, however, increasingly often there is no access to an underlying vectorial representation of the data, since only pairwise similarities or distances are measured. An example application domain where such a setting frequently occurs is biomedical data analysis, where more often than not only pairwise distance data is available, e.g., when DNA or protein sequences are represented as pairwise distances or string alignments [9, 16, 23, 25, 26]. Although many clustering methods exist that work on distance data, including single linkage clustering, complete linkage clustering, and Ward's clustering [14], these methods are static and oblivious to any potentially underlying time structure. However, when data is obtained at different points in time, dynamic
models are needed that take the time component into account. For example, in cancer research, genes are frequently measured at different time points in order to examine the efficacy of a medication over time. In network security, HTTP connections are recorded at various timestamps, since network behaviors can quickly change over time; in computer vision, video streams contain time-indexed sequences of images. To deal with such scenarios, dynamic models that take the evolving nature of data into account are needed. Such a requirement has been addressed with evolutionary or dynamic clustering models for
vectorial data (as for instance in [2], [7], [29], or [33]), which obtain a smooth clustering over multiple time points. However, to the best of our knowledge, no time-evolving clustering models exist that work on distance data directly, and clustering of time-evolving distance data is still an unsolved problem. In this work we bridge this gap and present a novel Bayesian time-evolving clustering model that works on distance data directly, is specially tailored to temporal data, and does not require access to an underlying vector space. Our model is able to detect cluster popularity over time, based on the rich-get-richer phenomenon: we can make predictions about how popular a cluster will be at time point $t+1$ if we know that it was a rich cluster at time point $t$. The assumption that rich clusters get richer is plausible in many domains; for instance, a hot news topic is likely to stay hot for a given time period. Our model is also able to cope with variability in data size: the number of data points may vary between time points, for instance when data items arrive or leave. The number of clusters may also vary over time, and the model adjusts its capacity accordingly and automatically. The aim is to find the underlying structure at every time point and to obtain a smooth cluster evolution, which results in an easily interpretable model. The information shared across neighboring time points is related to the size of the clusters, the time-varying property of the clusters is assumed to be Markovian, and Markov chain Monte Carlo (MCMC) sampling is used for inference.
The presented method is also applicable to the less general case of pairwise similarity data, by using a slightly altered likelihood. Since Mercer kernels can encode similarities between many different kinds of objects (for instance kernels on graphs, images, structures or strings), the method proposed here can in particular cover the entire scope of applications of kernel-based learning, be it string alignment kernels over DNA or protein sequences [16, 23, 26] or diffusion kernels on graphs [30].
We validate our approach by comparing it to baseline methods on simulated data, where our new model significantly outperforms state-of-the-art clustering approaches. We apply our novel model to a highly topical and challenging real-world data set of brain cancer patients from Memorial Sloan Kettering Cancer Center (MSKCC). This data consists of clinical notes, as part of the electronic health records (EHR) of brain cancer patients, over three consecutive years. We model brain cancer patients over time, where patients are grouped together based on the similarity of sentences in their clinical notes (see Section 4.2).
2 Background
In this section we recap important background knowledge that is essential for the remainder of this paper.
Partition Process:
Let $\mathcal{B}_n$ denote the set of partitions of $[n]$, where $[n] := \{1, \dots, n\}$ denotes an index set. A partition $B \in \mathcal{B}_n$ is an equivalence relation $B: [n] \times [n] \to \{0, 1\}$ with $B_{ij} = 1$ if $y(i) = y(j)$ and $B_{ij} = 0$ otherwise, where $y: [n] \to L$ denotes a function that maps $[n]$ to some label set $L$. Alternatively, $B$ may be represented as a set of disjoint non-empty subsets called "blocks". A partition process is a series of distributions $P_n$ on the sets $\mathcal{B}_n$ in which $P_n$ is the marginal distribution of $P_{n+1}$. This means that for each partition $B \in \mathcal{B}_{n+1}$, there exists a corresponding partition $B' \in \mathcal{B}_n$ which is obtained by deleting the last row and column from the matrix $B$. The properties of partition processes are discussed in detail in [19]. Such a process is called exchangeable if each $P_n$ is invariant under permutations of object indices; see [22] for more details. An example of the partition lattice for $\mathcal{B}_3$ is shown in Fig. 1.
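To make the matrix view of partitions concrete, the following sketch (in Python, with helper names of our own choosing) builds the equivalence-relation matrix $B$ from a label vector and illustrates the marginalization property mentioned above, namely that deleting the last row and column of $B$ yields a partition of the first $n-1$ objects.

```python
import numpy as np

def partition_matrix(labels):
    """Equivalence relation as a matrix: B[i, j] = 1 iff y(i) == y(j)."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(int)

B = partition_matrix([0, 0, 1, 2, 1])   # blocks {1,2}, {3,5}, {4}
B_marginal = B[:-1, :-1]                # induced partition of the first n-1 objects
```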
Gauss-Dirichlet Cluster Process:
The Gauss-Dirichlet cluster process consists of an infinite sequence of points in $\mathbb{R}^d$, together with a random partition of the integers into blocks. A sequence of length $n$ can be sampled as follows [17, 19]: fix the number of mixture modes $k$, generate mixing proportions $\pi = (\pi_1, \dots, \pi_k)$ from a symmetric Dirichlet distribution $\mathrm{Dir}(\lambda/k, \dots, \lambda/k)$, generate a label sequence $\{c_1, \dots, c_n\}$ from a multinomial distribution with parameter $\pi$, and forget the labels, introducing the random partition $B$ of $[n]$ induced by $\{c_1, \dots, c_n\}$. Integrating out $\pi$, one arrives at a Dirichlet-Multinomial prior over partitions
$$P_n(B \mid \lambda, k) = \frac{k!}{(k - k_B)!} \, \frac{\Gamma(\lambda)}{\Gamma(n + \lambda)} \prod_{b \in B} \frac{\Gamma(n_b + \lambda/k)}{\Gamma(\lambda/k)}, \qquad (1)$$
where $k_B \le k$ denotes the number of blocks present in the partition $B$ and $n_b$ is the size of block $b$. The limit as $k \to \infty$ is well defined and known as the Ewens process (a.k.a. the Chinese restaurant process (CRP)); see for instance [11, 20, 6]. Given such a partition $B$, a sequence of $n$-dimensional observations $x_1, \dots, x_d$, $x_s \in \mathbb{R}^n$, is arranged as columns of the matrix $X \in \mathbb{R}^{n \times d}$, and this $X$ is generated from a zero-mean Gaussian distribution with covariance matrix
$$\Sigma_X = \Sigma_1 \otimes I_n + \Sigma_0 \otimes B. \qquad (2)$$
$\Sigma_1$ denotes the within-class covariance matrix and $\Sigma_0$ the between-class matrix, respectively, and $\otimes$ denotes the Kronecker product; entrywise, $\mathrm{Cov}(X_{ir}, X_{js}) = \delta_{ij}\Sigma_1[r,s] + B_{ij}\Sigma_0[r,s]$, with $\delta$ the Kronecker symbol. Since the partition process is invariant under permutations, we can always think of $B$ being block-diagonal. For spherical covariance matrices (i.e. scaled identity matrices), $\Sigma_1 = \sigma_1^2 I_d$ and $\Sigma_0 = \sigma_0^2 I_d$, the covariance structure reduces to $\Sigma_X = I_d \otimes (\sigma_1^2 I_n + \sigma_0^2 B)$. Thus, the columns of $X$ contain independent $n$-dimensional vectors $x_s$ distributed according to a normal distribution with covariance matrix
$$\Sigma = \sigma_1^2 I_n + \sigma_0^2 B. \qquad (3)$$
Further, the distribution factorizes over the blocks $b \in B$. Introducing the symbol $i_b$ defining an index-vector of all objects assigned to block $b$, the joint distribution reads
$$p(X, B) = P_n(B \mid \lambda, k) \cdot \prod_{s=1}^{d} \prod_{b \in B} \mathcal{N}\!\left(X_{i_b, s} \,;\, 0_{n_b},\; \sigma_1^2 I_{n_b} + \sigma_0^2 \mathbf{1}_{n_b} \mathbf{1}_{n_b}^{t}\right), \qquad (4)$$
where $n_b$ is the size of block $b$ and $\mathbf{1}_{n_b}$ a vector of ones. From the viewpoint of clustering, $n$ denotes the number of objects we want to partition, and $d$ the dimension of each object.
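As an illustration of this generative process, the following sketch samples a partition from the Dirichlet-Multinomial construction and then draws the columns of $X$ i.i.d. from $\mathcal{N}(0, \sigma_1^2 I_n + \sigma_0^2 B)$; the variances and all function names are our own choices for illustration, not part of the model specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gauss_dirichlet(n, d, k, lam, s0=2.0, s1=0.5):
    """Sample (B, X) from the Gauss-Dirichlet cluster process (spherical case)."""
    pi = rng.dirichlet(np.full(k, lam / k))          # mixing proportions
    labels = rng.choice(k, size=n, p=pi)             # label sequence, then forgotten
    B = (labels[:, None] == labels[None, :]).astype(float)
    Sigma = s1**2 * np.eye(n) + s0**2 * B            # Eq. (3)
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=d).T  # n x d
    return B, X

B, X = sample_gauss_dirichlet(n=20, d=100, k=3, lam=1.0)
S = X @ X.T   # given B, S follows a Wishart distribution with d degrees of freedom
```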
Wishart-Dirichlet Cluster Process:
Assume that the random matrix $X \in \mathbb{R}^{n \times d}$ follows the zero-mean Gaussian distribution specified in (2), with $\Sigma_1 = \sigma_1^2 I_d$ and $\Sigma_0 = \sigma_0^2 I_d$. Then, conditioned on the partition $B$, the inner-product matrix $S = XX^t$ follows a (possibly singular) Wishart distribution in $d$ degrees of freedom, $S \sim W_d(\Sigma)$, as was shown in [27]. If we directly observe the dot products $S$, it suffices to consider the conditional probability of partitions $p(B \mid S)$:
$$p(B \mid S, \lambda, k, \sigma_0, \sigma_1) \;\propto\; W_d(S \mid \Sigma) \cdot P_n(B \mid \lambda, k). \qquad (5)$$
Information Loss:
Note that we assumed that there exists a matrix $X$ with $S = XX^t$ such that the columns of $X$ are independent copies drawn from a zero-mean Gaussian in $\mathbb{R}^n$: $x_s \sim \mathcal{N}(0_n, \Sigma)$. This assumption is crucial, since general mean vectors correspond to a non-central Wishart model [3], which can be calculated analytically only in special cases, and even these cases have a very complicated form which imposes severe problems in deriving efficient inference algorithms.
By moving from vectors $X$ to pairwise similarities $S$, and from similarities to pairwise distances $D$, information about geometric transformations is lost: assume we only observe $S$ without access to the vectorial representations $X$. Then we have lost the information about orthogonal transformations $O$ with $OO^t = I$, i.e. about rotations and reflections of the rows in $X$, since $(XO)(XO)^t = XX^t = S$. If we only observe $D$, we have additionally lost the information about translations of the rows of $X$.
The models above imply that the mean in each row of $X$ is expected to converge to zero as the number of replications $d$ goes to infinity. Thus, if we had access to $X$ and were not sure that the above zero-mean assumption holds, it would be a plausible strategy to subtract the empirical row means and then construct a candidate matrix $S_c$ by computing the pairwise dot products. This procedure should be statistically robust if $d \gg n$, since then the empirical means are probably close to their expected values. Such a corrected matrix $S_c$ fulfills two important requirements for selecting candidate dot-product matrices: First, $S_c$ should be "typical" with respect to the assumed Wishart model with zero mean, thereby avoiding any bias introduced by a particular choice. Second, the choice should be robust in a statistical sense: if we are given a second observation from the same underlying data source, the two selected prototypical matrices $S_c$ and $S_c'$ should be similar. For small $d$, this correction procedure is dangerous, since it can introduce a strong bias even if the model is correct: suppose we are given two replications from $\mathcal{N}(0_n, \Sigma)$, i.e. $d = 2$. After subtracting the row means, all row vectors lie on the diagonal line in $\mathbb{R}^2$, and the cluster structure is heavily distorted.
Consider now the case where we observe $S$ without access to $X$. For "correcting" the matrix $S$ just as described above, we would need a procedure which effectively subtracts the empirical row means of $X$ from the rows of $X$. Unfortunately, there exists no such matrix transformation that operates directly on $S$ without explicit construction of $X$. It is important to note that the "usual" centering transformation $S_c = QSQ$ with $Q = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^t$, as used in kernel PCA and related algorithms, does not work here: in kernel PCA the rows of $X$ are assumed to be i.i.d. replications in $\mathbb{R}^d$. Consequently, the centered matrix is built by subtracting the column means: $X_c = QX$ and $S_c = X_c X_c^t = QSQ$. Here, we need to subtract the row means, and therefore it is inevitable to explicitly construct $X$, which implies that we have to choose a certain orthogonal transformation $O$. It might be reasonable to consider only rotations and to use the principal components as coordinate axes. This is essentially the kernel PCA embedding procedure: compute $S_c = QSQ$ and its eigenvalue decomposition $S_c = V \Lambda V^t$, and then project on the principal axes: $X = V \Lambda^{1/2}$. The problem with this vector-space embedding is that it is statistically robust in the above sense only if $d$ is small, because otherwise the directions of the principal axes might be difficult to estimate, and the estimates for two replicated observations might fluctuate highly, leading to different column-mean normalizations. Note that this condition for fixing the rotation contradicts the above condition $d \gg n$
that justifies the subtraction of the means. Further, column-mean normalization will change the pairwise dissimilarities (even if the model is correct!), and this change can be drastic if $d$ is small. The cleanest solution might be to consider the distances $D$ (which are either obtained directly as input data, or can be computed as $D_{ij} = S_{ii} + S_{jj} - 2S_{ij}$) and to avoid an explicit choice of $S$ and $X$ altogether. Therefore, one encodes the translation invariance directly into the likelihood, which means that the latter becomes constant on all matrices $S$ that fulfill $D_{ij} = S_{ii} + S_{jj} - 2S_{ij}$. The information loss that occurs by moving from vectors to pairwise similarities and from similarities to pairwise distances is depicted in Fig. 2.
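The relation between $S$ and $D$, and the resulting translation invariance, can be checked numerically with a few lines (a minimal sketch; the random data and tolerances are arbitrary):

```python
import numpy as np

def squared_distances(S):
    """D_ij = S_ii + S_jj - 2 S_ij, cf. Eq. (6)."""
    s = np.diag(S)
    return s[:, None] + s[None, :] - 2.0 * S

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))
D = squared_distances(X @ X.T)

# Translating all rows of X changes S but leaves D unchanged:
Xt = X + rng.normal(size=(1, 5))
assert not np.allclose(X @ X.T, Xt @ Xt.T)
assert np.allclose(D, squared_distances(Xt @ Xt.T))
```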
Translation-invariant Wishart-Dirichlet Cluster Process:
A method which works directly on distances has been discussed in [1, 32] as an extension of the Wishart-Dirichlet cluster process. These methods cluster static distance data, and no access to vectorial data is required. The model presented in [32] tackles the problem in which we do not directly observe $S$, but only a matrix of pairwise Euclidean distances $D$. In the following, the assumption is that the (suitably pre-processed) matrix $D$ contains squared Euclidean distances with components
$$D_{ij} = S_{ii} + S_{jj} - 2S_{ij}. \qquad (6)$$
A squared Euclidean distance matrix $D$ is characterized by the property of being of negative type, which means that $x^t D x \le 0$ for any $x$ with $x^t \mathbf{1} = 0$. This condition is equivalent to the absence of negative eigenvalues in $S_c = -\frac{1}{2}QDQ$. The distribution of $D$ has been formally studied in [18], Eq. (3.2), where it was shown that if $S$ follows a standard Wishart generated from an underlying zero-mean Gaussian process, $S \sim W_d(\Sigma)$, then $-D$ follows a generalized Wishart distribution, defined with respect to the transformation kernel $\mathcal{K} = \mathbf{1}$. To understand the role of the transformation kernel it is useful to introduce the notion of a generalized Gaussian distribution with kernel $\mathcal{K}$, written $X \sim N(\mathcal{K}; \mu, \Sigma)$. For any transformation $L$ with $L\mathcal{K} = 0$, the meaning of the general Gaussian notation is $LX \sim N(L\mu, L\Sigma L^t)$. It follows that under the kernel $\mathcal{K} = \mathbf{1}$, two parameter settings $(\mu, \Sigma)$ and $(\mu', \Sigma')$ are equivalent if $L\mu = L\mu'$ and $L\Sigma L^t = L\Sigma' L^t$, i.e. if they agree modulo the kernel, a space which is usually denoted by $\mathbb{R}^n / \mathbf{1}$. It is also useful to introduce the distributional symbol $S \sim W(\mathcal{K}; d, \Sigma)$ for the generalized Wishart distribution of the random matrix $S = XX^t$ when $X \sim N(\mathcal{K}; 0, \Sigma)$. The key observation in [18] is that the map
$$\Delta: S \mapsto D, \qquad D_{ij} = S_{ii} + S_{jj} - 2S_{ij},$$
defines a linear transformation on symmetric matrices with kernel $\{x\mathbf{1}^t + \mathbf{1}x^t : x \in \mathbb{R}^n\}$, which implies that the distances follow a generalized Wishart distribution with kernel $\mathbf{1}$:
$$-D \sim W(\mathbf{1}; d, 2\Sigma). \qquad (7)$$
In the multidimensional case with spherical within- and between-class covariances, we generalize the above model to Gaussian random matrices $X \sim N(\mathbf{1}; 0, \Sigma_X)$ with $\Sigma = \sigma_1^2 I_n + \sigma_0^2 B$. Note that the columns of this matrix are i.i.d. copies. The matrix of squared Euclidean distances then follows a generalized Wishart distribution with $d$ degrees of freedom, $-D \sim W(\mathbf{1}; d, 2\Sigma)$. This distribution differs from a standard Wishart in that the inverse matrix $\Sigma^{-1}$ is substituted by the matrix $\widetilde{W} = \Sigma^{-1} - \frac{\Sigma^{-1}\mathbf{1}\mathbf{1}^t\Sigma^{-1}}{\mathbf{1}^t\Sigma^{-1}\mathbf{1}}$ and the determinant is substituted by a generalized $\overline{\det}$ symbol which denotes the product of the nonzero eigenvalues of its matrix-valued argument (note that $\widetilde{W}$ is rank-deficient). The conditional probability of a partition then reads
$$p(B \mid D, \lambda, k, \sigma_0, \sigma_1) \;\propto\; W(-D \mid \mathbf{1}; d, 2\Sigma) \cdot P_n(B \mid \lambda, k), \qquad (8)$$
and the probability density function (which serves as likelihood function in the model) is then defined as
$$p(D \mid B, \sigma_0, \sigma_1, d) \;\propto\; \overline{\det}(\widetilde{W})^{d/2} \, \exp\!\left(\tfrac{1}{4}\operatorname{tr}(\widetilde{W} D)\right). \qquad (9)$$
Note that in spite of the fact that this probability is written as a function of $D$, it is constant over all choices of $S$ which lead to the same $D$, i.e. it is invariant under translations of the row vectors in $X$. For the purpose of inferring the partition $B$, this invariance property means that one can simply use a block-partition covariance model $\Sigma = \sigma_1^2 I_n + \sigma_0^2 B$ and assume that the (unobserved) matrix $S$ follows a standard Wishart distribution parametrized by $\Sigma$. We do not need to care about the exact form of $S$, since the conditional posterior for $B$ depends only on $D$. An extensive analysis of the influence of encoding the translation invariance into the likelihood, versus the standard WD process and row-mean subtraction, was conducted in [32].
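A direct transcription of this likelihood, under the form of (9) as reconstructed above and with hyperparameters chosen for illustration only, could look as follows; the threshold used to separate zero from nonzero eigenvalues in $\overline{\det}$ is an implementation assumption.

```python
import numpy as np

def ti_wishart_loglik(D, B, s0=2.0, s1=0.5, d=100, tol=1e-10):
    """Translation-invariant log-likelihood log p(D | B) up to a constant,
    following the form of Eq. (9) (a sketch, not a reference implementation)."""
    n = D.shape[0]
    Sigma = s1**2 * np.eye(n) + s0**2 * B          # block-partition covariance
    Si = np.linalg.inv(Sigma)
    w = Si @ np.ones(n)
    W = Si - np.outer(w, w) / w.sum()              # projected inverse, W @ 1 = 0
    ev = np.linalg.eigvalsh(W)
    logdet = np.log(ev[ev > tol]).sum()            # generalized det: nonzero eigenvalues
    return 0.5 * d * logdet + 0.25 * np.trace(W @ D)
```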
3 A Time-evolving Translation-invariant Wishart-Dirichlet Process
In this section, we present a novel dynamic clustering approach, the time-evolving translation-invariant Wishart-Dirichlet process (Te-TiWD), for clustering distance data that is available at multiple time points. In this model, we assume that pairwise distance data $D^{(t)}$ with $1 \le t \le T$ is available over $T$ time points. At every time point all objects are fully exchangeable, and the number of data points may differ between time points. This model clusters data points over multiple time points, allowing group memberships and the number of clusters to evolve over time by addition, deletion or change of existing clusters. The model is based on the static clustering model proposed in [32], which is not able to account for a time structure. Note that our model completely ignores any information about the identities of the data points across time points, which makes it possible to cluster different objects over time. Table 1 summarizes the notation used in the following sections.
Table 1. Notation used in the following sections:

$D^{(t)}$ : distance matrix at time point $t$ (cf. (6))
$\widetilde{W}^{(t)}$ : matrix $\widetilde{W}$ at time point $t$ (cf. (7))
$B^{(t)}$ : partition matrix at time point $t$
$k_t$ : number of blocks present in the partition $B^{(t)}$
$n_b^{(t)}$ : the size of block $b$
$n_{b,-i}^{(t)}$ : size of block $b$ without object $i$
$n_t$ : number of data points present at the $t$-th time point
$A^{(t)}$ : $k_t \times k_t$ matrix; $A^{(t)}_{bb'}$ denotes the between-class variance of block $b$ and block $b'$
$\Sigma_0^{(t)}$ : is defined as $Z^{(t)} A^{(t)} (Z^{(t)})^t$ (cf. (11))
$\Sigma^{(t)}$ : is defined as $\sigma^2 I_{n_t} + \Sigma_0^{(t)}$ (cf. (10))
$B^{(1)}, \dots, B^{(T)}$ : defines a first-order Markov chain
$A^{(1)}, \dots, A^{(T)}$ : defines a first-order Markov chain
$A^{(-t)}$ : matrices $A$ at all time points except at time point $t$
3.1 The Model
The aim of the proposed method is to cluster distance data at multiple time points, for $t = 1, \dots, T$. For every time point under consideration $t$, we obtain a distance matrix $D^{(t)}$ and we want to infer the partition matrix $B^{(t)}$, utilizing the partitions from adjacent time points. By using information from adjacent time points, we expect better clustering results than when clustering every time point independently. At every time point, the number of data points may differ, and some clusters may die out or evolve over time. The assumptions on the data are the following:
Assumption 1
Given a partition $B^{(t)}$, a sequence of the assumed underlying $n_t$-dimensional vectorial observations $x_1, \dots, x_d$, $x_s \in \mathbb{R}^{n_t}$, are arranged as columns of the matrix $X^{(t)} \in \mathbb{R}^{n_t \times d}$, and $X^{(t)}$ is generated from a zero-mean Gaussian distribution with covariance matrix
$$\Sigma^{(t)} = \sigma^2 I_{n_t} + \Sigma_0^{(t)}. \qquad (10)$$
Covariance matrix $\Sigma_0^{(t)}$.
In the static clustering method, the underlying vectorial data was assumed to be distributed according to a zero-mean Gaussian with covariance matrix $\Sigma = \sigma_1^2 I_n + \sigma_0^2 B$ (cf. (3)), where $\sigma_0^2 B$ describes the between-class covariance. As $\sigma_0^2$ denotes a scalar, all clusters in the static clustering are equidistant (as demonstrated on the left of Fig. 3). To model time-evolving data, we need a more flexible between-class covariance matrix $\Sigma_0^{(t)}$ which allows cluster centroids to have different distances to each other. These full matrices are necessary for a time-evolving clustering model, as the clusters are coupled over the different time points due to the geometric information of the clusters, and this coupling can only be captured by modeling a richer covariance. Hereby $\Sigma_0^{(t)}$ is obtained in the following way
$$\Sigma_0^{(t)} = Z^{(t)} A^{(t)} \left(Z^{(t)}\right)^t \qquad (11)$$
with $A^{(t)} \in \mathbb{R}^{k_t \times k_t}$ positive definite. The matrix $Z^{(t)} \in \{0, 1\}^{n_t \times k_t}$ associates an object with one out of $k_t$ clusters. As every object can only belong to exactly one cluster, $Z^{(t)}$ has a single element of 1 per row.
In Fig. 3 we demonstrate examples of $\sigma_0^2 B$ and $\Sigma_0$ as well as the corresponding cluster arrangements which the matrices imply.
Note that $\Sigma_0$ is a more general version of $\sigma_0^2 B$:
Lemma 1
$Z A Z^t = \sigma_0^2 B$ iff $A = \sigma_0^2 I_k$.
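Lemma 1 is easy to verify numerically; the sketch below (names and values are illustrative) also shows how a generic positive definite $A$ produces a block-structured $\Sigma_0$ whose blocks are no longer forced to share a single value $\sigma_0^2$.

```python
import numpy as np

labels = np.array([0, 0, 1, 2, 1])
k = labels.max() + 1
Z = np.eye(k)[labels]                      # n x k, a single 1 per row
B = (labels[:, None] == labels[None, :]).astype(float)

s0 = 1.7
assert np.allclose(Z @ (s0**2 * np.eye(k)) @ Z.T, s0**2 * B)  # Lemma 1, "if" direction

rng = np.random.default_rng(2)
M = rng.normal(size=(k, k))
A = M @ M.T + 1e-6 * np.eye(k)             # generic positive definite A
Sigma0 = Z @ A @ Z.T                       # blocks now carry different variances
```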
Prior over the block matrices $B^{(t)}$.
The prior over the block matrices is defined in the following way. The prior for $B^{(t)}$ within one epoch is the Dirichlet-Multinomial prior over partitions as in (1). Using the definition of the conditional prior over clusters as defined in [2], we extend this idea to a prior over partitions. In a generative sense, the same idea is used to generate a labeled set of partitions, and then we forget the labels to get a distribution over partitions. By $n_b^{(t-1)}$ we denote the size of block $b$ at time point $t-1$ if the corresponding block is present at time point $t$ as well. We consider the following generative process for a finite dynamic mixture model with $k$ mixtures (cf. [2], Eqs. (4.5), (4.6) and (5.9)): for each time point $t$, we generate mixing proportions $\pi^{(t)}$ from a symmetric Dirichlet distribution $\mathrm{Dir}(\lambda/k, \dots, \lambda/k)$. As in the static case, we generate a label sequence from a multinomial distribution and forget the labels, introducing the random partition $B^{(t)}$. Integrating out $\pi^{(t)}$, the conditional distribution for the Dirichlet-Multinomial prior over partitions, given the partition at the previous time point $B^{(t-1)}$, can be written as:
(12)
Note that (12) defines a partition process as described in Section 2, with each $P_n$ being the marginal distribution of $P_{n+1}$, and it is also an exchangeable process, as each $P_n$ is invariant under permutations of object indices.
Prior over $A^{(t)}$.
The prior over the matrices $A^{(t)}$ is given by a Wishart distribution centered at the matrix of the previous time point, $A^{(t)} \mid A^{(t-1)} \sim \mathcal{W}\big(\theta, A^{(t-1)}/\theta\big)$, so that $\mathbb{E}[A^{(t)} \mid A^{(t-1)}] = A^{(t-1)}$. The degrees of freedom $\theta$ influence the behavior of the Wishart distribution: a low value for $\theta$ allows drastic changes in the clustering structure, a high value for $\theta$ allows fewer changes. We also have to consider that the sizes of $A^{(t-1)}$, $A^{(t)}$ and $A^{(t+1)}$ might differ, as it is possible that the number of clusters differs between epochs. Therefore, we consider the following two cases:

If there are more blocks at time point $t-1$ than at time point $t$, i.e. $k_{t-1} > k_t$: delete the corresponding rows and columns in $A^{(t-1)}$. With $A_r^{(t-1)}$ we denote the "reduced" matrix. Then it holds that $A^{(t)} \mid A_r^{(t-1)} \sim \mathcal{W}\big(\theta, A_r^{(t-1)}/\theta\big)$.
If there are fewer blocks at time point $t-1$ than at time point $t$, i.e. $k_{t-1} < k_t$: first, draw a matrix $\widetilde{A} \sim \mathcal{W}\big(\theta, A^{(t-1)}/\theta\big)$. Second, augment as many new rows and columns as needed to obtain the full positive definite matrix $A^{(t)}$. The additional rows and columns of $A^{(t)}$ can be drawn from the conditional distributions of a partitioned Wishart matrix, given in (13) and (14) (see [5] for details).
A graphical depiction of the generative model of Te-TiWD is given in Fig. 4.
Posterior over $B^{(t)}$ and $A^{(t)}$.
The posterior is proportional to the product of the priors defined above and the translation-invariant Wishart likelihood (9), evaluated at every time point.
3.1.1 MCMC sampling for posterior inference.
For applying MCMC sampling to sample from the posterior, we consider the conditional distributions at each time point $t$:
(17)
Posterior sampling for $B^{(t)}$.
The posterior sampling involves sampling cluster assignments. As we are dealing with non-conjugate priors in (17), we use a Gibbs sampling algorithm with auxiliary variables as presented in [20]. We consider the infinite model with $k \to \infty$. The aim is to assign one object $i$ in epoch $t$ either to an existing cluster $b$, to a new cluster that exists at epoch $t-1$ or epoch $t+1$, or to a completely new cluster. The prior probability that object $i$ belongs to an existing cluster $b$ at time point $t$ is
(18)
There exist four different prior probabilities of an object belonging to a new cluster at time point $t$, which are summarized in Table 2.
Table 2. Prior probabilities of object $i$ belonging to a new cluster at time point $t$, depending on where that cluster exists:

the cluster exists at both time points $t-1$ and $t+1$: (19)
the cluster exists at time point $t-1$ but not at time point $t+1$: (20)
the cluster exists at time point $t+1$ but not at time point $t-1$: (21)
the cluster exists neither at $t-1$ nor at $t+1$, i.e. $i$ belongs to a completely new cluster: (22)
Metropolis-Hastings update steps.
At every time point, we need to sample the values of the between-class covariance matrix $A^{(t)}$. To find the values within one epoch, we sample the whole "new" matrix, denoted by $A^{(t)}_{\text{new}}$, with a Metropolis-Hastings algorithm (see [24]); with $A^{(t)}_{\text{old}}$ we denote the initial matrix. As proposal distribution we choose a Wishart distribution, and for the prior we choose a Wishart distribution as well, which leads to a standard Metropolis-Hastings acceptance ratio composed of prior, likelihood and proposal terms.
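A sketch of one such update step is given below, assuming a generic log-likelihood function for $A$ and using Wishart proposal and prior densities from scipy; the parametrization (degrees of freedom $\theta$ for the prior, $v$ for the proposal, both scaled so the means match) is our assumption, not a prescription from the paper.

```python
import numpy as np
from scipy.stats import wishart

def mh_update_A(A, A_prev, log_lik, theta=50.0, v=100.0, rng=None):
    """One Metropolis-Hastings step for the between-class matrix A (a sketch).
    Prior: W(theta, A_prev/theta); proposal: W(v, A_current/v)."""
    rng = rng or np.random.default_rng()
    A_new = wishart.rvs(df=v, scale=A / v, random_state=rng)
    log_ratio = (
        log_lik(A_new) - log_lik(A)
        + wishart.logpdf(A_new, df=theta, scale=A_prev / theta)
        - wishart.logpdf(A, df=theta, scale=A_prev / theta)
        + wishart.logpdf(A, df=v, scale=A_new / v)     # q(old | new)
        - wishart.logpdf(A_new, df=v, scale=A / v)     # q(new | old)
    )
    return A_new if np.log(rng.uniform()) < log_ratio else A
```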
Hyperparameters and Initialization.
Our model includes the following hyperparameters: the scale parameter $\sigma^2$, the number of clusters $k$, the Dirichlet rate $\lambda$, the degrees of freedom $\theta$, and a scale parameter for the proposal distribution. The model is not sensitive to the choice of $\sigma^2$, and we fix $\sigma^2$ to 1. The proposal scale is sampled from a Gamma distribution with given shape and scale parameters. For the number of clusters, our framework is applicable to two scenarios: we can either assume $k \to \infty$, which results in the CRP model, or we fix $k$ to a large constant, which can be viewed as a truncated Ewens process. As the model does not suffer from the label-switching problem, initialization is not a crucial issue. We initialize with blocks of size 1, i.e. every object starts in its own cluster. The Dirichlet rate $\lambda$ only weakly influences the likelihood, and its variance only decays with $1/\log(n)$ (see [11]). In practice, we should not expect to reliably estimate $\lambda$. Rather, we should have some intuition about $\lambda$, maybe guided by the observation that under the Ewens process model the probability of two objects belonging to the same cluster is $1/(1+\lambda)$. We can then either define an appropriate prior distribution, or we can fix $\lambda$. Due to the weak effect of $\lambda$ on the conditionals, these approaches are usually very similar. The degrees of freedom $d$ can be estimated by the rank of the similarity matrix, if it is known from a pre-processing procedure. As $d$ is not a very critical parameter (all likelihood contributions are basically raised to the power of $d$), it might also be used as an annealing-type parameter for freezing a representative partition in the limit $d \to \infty$.

Pseudocode.
A pseudocode of the sampling algorithm is given in Algorithm 1.
Complexity.
We define one sweep of the Gibbs sampler as one complete update of $(B^{(t)}, A^{(t)})$ for all time points. The most time-consuming part of a sweep is the update of $B^{(t)}$ by re-estimating the assignments to blocks for a single object (characterized by a row/column in $D^{(t)}$), given the partition of the remaining objects. Therefore we have to compute the membership probabilities in all existing blocks (and in a new block). Every time a new partition is analyzed, a naive implementation requires $O(n_t^3)$ costs for computing the determinant of $\widetilde{W}^{(t)}$ and the product $\widetilde{W}^{(t)} D^{(t)}$. In one sweep we need to compute such probabilities for all $n_t$ objects, summing up to costs of order $O(n_t^4)$. This suggests that scalability to large datasets can pose a problem. In this regard we plan to address run time in future work by investigating the potential of variational methods, parallelizing the MCMC sampler, and updating parameters associated with multiple time points simultaneously.
Identifiability of clusters.
In some applications, it is of interest to identify and track clusters over time. For example, when grouping newspaper articles into topics, it might be interesting to know which topics are present over a long time period, when a new topic becomes popular, and when a formerly popular topic dies out. Due to the translation invariance of our novel longitudinal model, we additionally need a cluster mean to be able to track clusters over the time course. To estimate the means of the clusters we propose to embed the "overall" data matrix $D^{\mathrm{all}}$, which contains the pairwise distances between all objects over all time points, into a vector space using kernel PCA. We first construct a positive semi-definite matrix $S^{\mathrm{all}}$ which fulfills $D^{\mathrm{all}}_{ij} = S^{\mathrm{all}}_{ii} + S^{\mathrm{all}}_{jj} - 2S^{\mathrm{all}}_{ij}$. For correcting $S^{\mathrm{all}}$, we compute the centered matrix $S_c = Q S^{\mathrm{all}} Q$ with $Q = I - \frac{1}{N}\mathbf{1}\mathbf{1}^t$. As a next step, we compute the eigenvalue decomposition of $S_c$, i.e. $S_c = V \Lambda V^t$, and then project on the principal axes, $X = V \Lambda^{1/2}$, i.e. we use the principal components as coordinate axes. By embedding the distances into a vector space, the underlying block structure might be distorted (see Fig. 2). As our aim is to find the underlying block structure, it is hence infeasible to embed the data for clustering. But for tracking the clusters, we just need the mean of an already inferred block structure, i.e. we embed the data not for grouping data points, but for finding the mean of an already assigned partition, which allows us to track the clusters over time. We embed all objects together and choose the same orthogonal transformation for all objects, which enables identifiability of cluster means over the time course. This pre-processing step is only necessary if one is interested in the identifiability of clusters, and it is computationally expensive, so it is computed only once, outside the sampling routine; computing it within the sampling routine would slow down our sampler significantly.
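The embedding itself is the classical kernel PCA / multidimensional scaling construction; a compact sketch (with an arbitrary numerical threshold for discarding zero modes) is given below. Cluster means can then be computed by averaging the embedded coordinates within each inferred block.

```python
import numpy as np

def embed_distances(D, tol=1e-10):
    """Embed a squared-distance matrix via kernel PCA / classical MDS."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    Sc = -0.5 * Q @ D @ Q                   # centered dot-product matrix
    lam, V = np.linalg.eigh(Sc)
    keep = lam > tol                        # drop zero (and numerically negative) modes
    return V[:, keep] * np.sqrt(lam[keep])  # rows are embedded objects

# Mean of an inferred cluster: average of its rows in the embedding.
```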
4 Experiments
4.1 Synthetic Experiments
4.1.1 Well-separated clusters.
In a first experiment, we test our method on simulated data. We simulate data in two ways: first we generate data points according to the model assumptions, and second we generate data independently of the model assumptions. We start with a small experiment where we consider five time points, each with 20 data points per time point in 100 dimensions, i.e. we consider a problem with small data set size and large dimension.
Data generation.
The data is generated (according to the model assumptions) in the following way: for the first time point, a random block matrix $B^{(1)}$ of size $20 \times 20$ is sampled with $k_1 = 3$ (i.e. we generate 3 blocks at time point 1). A matrix $A^{(1)}$ is sampled from a Wishart distribution, and $\Sigma_0^{(1)}$ is filled with the corresponding values from $A^{(1)}$, which leads to the covariance matrix $\Sigma^{(1)}$. Next, $d = 100$ samples are drawn from a zero-mean Gaussian with covariance $\Sigma^{(1)}$ and stored in the matrix $X^{(1)}$. By choosing a between-class variance that is large relative to the within-class variance, we create well-separated clusters. The similarity matrix $S^{(1)}$ is computed, and squared distances are stored in the matrix $D^{(1)}$. For the following time points $t = 2, \dots, 5$, the partition for the block matrix $B^{(t)}$ is drawn from a Dirichlet-Multinomial distribution, conditioned on the partition at time point $t-1$. A new matrix $A^{(t)}$ is sampled from a Wishart distribution; if the numbers of blocks at time points $t-1$ and $t$ differ, we sample $A^{(t)}$ according to Eq. (14). Then $d$ samples are drawn from a zero-mean Gaussian with covariance $\Sigma^{(t)}$, and the pairwise squared distances are stored in the matrix $D^{(t)}$. A PCA projection of this data is shown in Fig. 5 for illustration.
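For concreteness, a condensed sketch of one such simulated time point is given below; the Wishart scale and the crude positive-definite draw for $A^{(t)}$ are stand-ins for the exact values used in the experiments, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_timepoint(labels, d=100, s=1.0, scale=5.0):
    """One synthetic time point: Sigma0 = Z A Z^T, X ~ N(0, s^2 I + Sigma0)."""
    n, k = len(labels), labels.max() + 1
    Z = np.eye(k)[labels]
    A = scale * np.cov(rng.normal(size=(k, 2 * k)))      # crude positive definite draw
    Sigma = s**2 * np.eye(n) + Z @ A @ Z.T
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=d).T
    S = X @ X.T
    dg = np.diag(S)
    return dg[:, None] + dg[None, :] - 2 * S             # squared distances D

D1 = simulate_timepoint(labels=rng.choice(3, size=20))
```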
Experiments.
We perform four illustrative experiments on well-separated data:

500 Gibbs sweeps are computed for the Te-TiWD cluster process (after a burn-in phase of 250 sweeps). We check convergence of the algorithm by analyzing the trace of the number of blocks during sampling. On this trace plot we observe after how many sweeps the sampler stabilizes (the number of sweeps depends on the size of the data set). We observe a remarkable stability of the sampler (compared to the usual situation in traditional mixture models), which follows from the fact that no label switching can appear. Finally, we perform an annealing procedure to freeze a certain partition, where $d$ is used as an annealing-type parameter for freezing a representative partition in the limit $d \to \infty$. On a standard computer, this experiment took roughly 4 minutes, and the sampler stabilizes after roughly 50 sweeps. As the ground truth is known, we can compute the adjusted Rand index as an indicator for the accuracy of the Te-TiWD model. We repeat the clustering process 50 times. The result is shown in form of a box plot (Te-TiWD) in Fig. 6.

In order to compare the performance of the time-evolving model (Te-TiWD) to baseline models, we also run the static probabilistic clustering process as well as hierarchical clustering models (Ward, complete linkage and single linkage) on every time point separately and compute the averaged accuracy over all time points (see the evaluation sketch after this list). For the comparison to the static probabilistic method [32], we use the same setup as for Te-TiWD: we run 500 Gibbs sweeps with a burn-in phase of 250 sweeps and repeat this 50 times. For the hierarchical methods, the resulting trees are cut at the number of clusters found by the nonparametric probabilistic model. Accuracy is computed for every time point separately, and then averaged over all time points. In this scenario, the static clustering models perform almost as well as the time-evolving clustering, see Fig. 6, as expected in a setting where all groups are well separated at every single time point.
As a further comparison to a baseline dynamic clustering model, we embed the distances into a Euclidean vector space and run a Gaussian dynamic clustering model (Te-Gauss) on the embedded vectorial data. As the clusters are well separated, embedding the data and clustering on vectors works well, as shown in box plot "Te-Gauss" in Fig. 6.

As a last comparison we evaluate a pooled clustering over all time points. For this experiment, we not only need the pairwise distances at every single time point, but also the pairwise distances of objects across all time points. The number of sweeps and repetitions remains the same as in the experiments above. We conduct one clustering over all objects of all time points; after clustering, we extract the objects belonging to the same time point and compute the Rand index on every time point separately. This experiment shows worse results (see box plot "pooled" in Fig. 6), which can be explained as follows: by combining all time points into one data matrix, new clusters over all time points are found. Clusters are shifted and objects from different time points are grouped together, reforming the boundaries of old clusters. These new clusters prevent objects from grouping together that would group together at single time points, destroying the underlying "true" cluster structure.
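For the hierarchical baselines, the per-time-point evaluation amounts to cutting a linkage tree at a given number of clusters and scoring against the ground truth; a sketch using scipy and scikit-learn is shown below (the square root converts squared distances back to Euclidean ones, an assumption of this snippet).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def baseline_ari(D, true_labels, n_clusters, method="ward"):
    """Adjusted Rand index of a hierarchical baseline on one time point."""
    cond = squareform(np.sqrt(D), checks=False)        # condensed distance vector
    tree = linkage(cond, method=method)
    pred = fcluster(tree, t=n_clusters, criterion="maxclust")
    return adjusted_rand_score(true_labels, pred)

# Average baseline_ari over all time points to reproduce the comparison setup.
```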
4.1.2 Highly overlapping clusters.
For a second experiment, we generated data in a similar way as above, but this time we create 5 highly overlapping clusters with 200 data points per time point in 40 dimensions. A PCA projection of this data is shown in Fig. 8. On a standard computer, this experiment took roughly 3 hours, and the sampler stabilizes after roughly 500 sweeps. Again, we compare the performance of the translation-invariant time-evolving clustering model with static state-of-the-art probabilistic and hierarchical clustering models, which cluster every time point separately, and with a time-varying Gaussian clustering model on embedded data (Te-Gauss). For highly overlapping clusters, the new dynamic clustering model outperforms the static probabilistic clustering model [32], and the hierarchical models (Ward, complete linkage, single linkage) fail completely. Further, our new model Te-TiWD outperforms the dynamic vectorial clustering model (Te-Gauss), demonstrating that embedding the data into a Euclidean vector space yields worse results than working on the distances directly. We tested statistical significance with the Kruskal-Wallis rank-sum test and Dunn's post test with Bonferroni correction for pairwise analysis. These tests show that Te-TiWD performs significantly better than all clustering models we compared to. Results are shown in Fig. 7.
4.1.3 Data generation independent of model assumptions.
We also generate data in a second way, independent of the model assumptions, to demonstrate that the performance of Te-TiWD does not depend on the way the data was generated. To this end, we repeat the case of highly overlapping clusters over 5 time points and generate data in the following way: dynamic Gaussian clusters are generated over a period of 5 time points. At each time point five clusters are generated. 200 data points are available at every time point and randomly split into 5 parts, every part representing the number of data points per cluster. For consecutive time points, the number of data points per cluster is sampled from a Dirichlet-Multinomial distribution. Every cluster is sampled from a Gaussian distribution with a large variance, resulting in highly overlapping clusters. Between time steps, the cluster centers move randomly, with relocations sampled from the same distribution. Finally, at every time point, the pairwise distance matrix is computed, resulting in a series of moving distance matrices. On this second synthetic data set, Te-TiWD also performs significantly better than all baseline methods, as shown in Fig. 9. Note that for the comparison with the Gaussian dynamic clustering model (Te-Gauss) we first embed the distances into vectorial data and do not work on the simulated vectors directly, to obtain a fair comparison.
4.2 Analysis of Brain Cancer Patients based on Electronic Health Records (EHR)
We apply our proposed model to a dataset of clinical notes from brain cancer patients at Memorial Sloan Kettering Cancer Center (MSKCC). Brain cancer patients make up 1.4% of all cancer patients annually. Survival is highly variable, depending on age, gender, cancer subtype, and progression when caught, but on average 33% of patients survive the first five years.
As a first step, we partition a total of 195,297 sentences from 3,403 electronic health records (EHR) from 704 MSKCC brain cancer patients into groups of similar vocabulary. This is done by treating sentences as binary vectors with nonzero entries corresponding to vocabulary, and obtaining a similarity measure using ranked neighborhood comparisons [31]. Sentences are clustered using this similarity measure with the Louvain method [8]. Using these sentence clusters as features, we obtain patient similarities with the same ranked neighborhood comparison method. We partition the patients' documents into windows of one year each, and obtain three time points where enough documents are available to compute similarities between patients. At each year, we represent a patient by a binary vector whose length is the number of sentence clusters, where a nonzero entry corresponds to an occurrence of that sentence cluster in the patient's corpus during the specified time period.
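The patient representation can be sketched as follows; since the ranked neighborhood comparison of [31] is not reproduced here, the snippet uses plain Jaccard similarity between the binary indicator vectors as a stand-in, and the conversion to squared distances is likewise an assumption.

```python
import numpy as np

def jaccard_similarity(F):
    """Pairwise similarity between binary patient vectors (rows of F).
    Stand-in for the ranked neighborhood comparison of [31]."""
    inter = F @ F.T                                  # shared sentence clusters
    sizes = F.sum(axis=1)
    union = sizes[:, None] + sizes[None, :] - inter
    return inter / np.maximum(union, 1)

# F_t: patients-by-sentence-cluster indicator matrix for year t.
F_t = (np.random.default_rng(4).random((30, 100)) < 0.1).astype(float)
S_t = jaccard_similarity(F_t)
D_t = 2.0 * (1.0 - S_t)     # one possible squared-distance transform (assumption)
```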
In the first year, we have 704 patients, in the second year 170, and in the third year 123 patients. This data set has specific features which make our model particularly suitable for it. First, the number of patients differs in every year. Second, patients disappear over the time course, either due to death or due to leaving the hospital. Third, patients do not necessarily have a document every year, so a patient can be absent in year 2 and reappear in year 3. Such gaps occurred a total of 31 times in our data set.
Our flexible model is thus very well suited for this problem, as it can deal with changing numbers of objects and changing numbers of clusters in every year; clusters can disappear or reappear, and so can patients. The result of our clustering model is shown in Fig. 10. On a standard computer, this experiment took roughly 6 hours, and the sampler stabilizes after roughly 500 sweeps.
We observe ten different cluster chains over the time series. Note that patients can switch cluster chains over the years: as the tumor progresses, the status of a patient may change, resulting in more similarity to a different cluster chain than in the year before. To analyze the results of the method, we discuss the most and least deadly clusters in more detail, as analyzing all subtleties between clusters would be out of the scope of this paper.
Cluster chain 1 is the most deadly cluster, with a death rate of 80%; additionally, it only appears in the first year. Word clouds representing the sentence clusters of this patient group are shown in Fig. 11. We can see that these patients are having seizures, which indicates that the brain cancer is especially aggressive. They also show sentence clusters about two types of blood cancer, B-cell and mantle cell lymphoma, and the prescription of cytarabine, which treats these cancers. This combination of blood and brain cancers could explain why this cluster chain is so deadly.
Cluster chain 5 is the least deadly chain, with a death rate of 42%. Word clouds representing sentence clusters for this patient group are shown in Fig. 12. These clusters consist mainly of "follow-up" language, such as checking the patients' gait, speech, reflexes and vision. The sentence clusters appear to indicate positive results, e.g. "Normal visual fields are intact" and "Patient denies difficulty with speech, language, balance or gait" are two prototype sentences representing two sentence clusters that appear in this chain. Furthermore, there is a sentence cluster with the prototype sentence "no evidence for progression", indicating that these patients' cancers are in a manageable state.
Modeling patients over time provides important insights for automated analyses and for medical doctors, as it is possible to check, for every patient, how the state of the patient (as represented by the cluster membership) changes over time. Also, if a new patient enters the study, one can infer, based on similarity to other patients, how to classify and possibly best treat this patient, or suggest suitable clinical trials. Such clustering methods therefore make an important step towards solving the technical challenges of personalized cancer treatment.
5 Conclusion
In this work, we propose a novel dynamic Bayesian clustering model to cluster time-evolving distance data. A probabilistic model that is able to handle non-vectorial data in the form of pairwise distances has the advantage that there is no need to embed the data into a vector space. To summarize, our contributions in this work are fivefold: i) We develop a dynamic probabilistic clustering approach that circumvents the potentially problematic data embedding step by directly operating on pairwise time-evolving distance data. ii) Our model enables tracking clusters over time, giving information about clusters that die out or emerge over time. iii) By using a Dirichlet process prior, there is no need to fix the number of clusters in advance. iv) We test and validate our model on simulated data, comparing the performance of our new method with baseline probabilistic and hierarchical clustering methods. v) We use our model to cluster brain cancer patients into similar subgroups over a time course of three years. Dynamic partitioning of patients could play an important role in cancer treatment, as it enables inference from groups of similar patients to an individual. Such inference can help medical doctors to adapt or optimize existing treatments, assign billing codes, or predict survival times for a patient based on similar patients in the same group.
Acknowledgments
We thank Natalie Davidson, Theofanis Karaletsos and David Kuo for helpful discussions and suggestions. JV and MK were partly funded through postdoctoral fellowships awarded by the Swiss National Science Foundation (SNSF; under PBBSP2_146758) and by the German Research Foundation (DFG; under Kl 2698/11 and VO 2003/11), respectively. We gratefully acknowledge funding from Memorial Sloan Kettering Cancer Center and the National Cancer Institute (grant 1R01CA17678501A1). Access to patient data is covered under IRB Waiver #WA042613.
References
 [1] David Adametz and Volker Roth. Bayesian partitioning of large-scale distance data. In NIPS, pages 1368–1376, 2011.
 [2] Amr Ahmed and Eric Xing. Dynamic nonparametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. Proceedings of The Eighth SIAM International Conference on Data Mining (SDM), 2008.
 [3] T.W. Anderson. The non-central Wishart distribution and certain problems of multivariate statistics. Ann. Math. Statist., 17(4):409–431, 1946.
 [4] Seema Bandyopadhyay and Edward J Coyle. An energy efficient hierarchical clustering algorithm for wireless sensor networks. In INFOCOM 2003: Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies, volume 3, pages 1713–1723. IEEE, 2003.
 [5] M. Bilodeau and D. Brenner. Theory of Multivariate Statistics. Springer, 1999.
 [6] D. Blei and M. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1:121–144, 2006.
 [7] David M. Blei and Peter Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011.
 [8] Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008.

 [9] M. Cuturi and J.-P. Vert. A mutual information kernel for strings. In Proceedings of the International Joint Conference on Neural Networks, 2004.
 [10] Michael B Eisen, Paul T Spellman, Patrick O Brown, and David Botstein. Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences, 95(25):14863–14868, 1998.
 [11] W.J. Ewens. The sampling theory of selectively neutral alleles. Theoretical Population Biology, 3:87–112, 1972.
 [12] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209–230, 1973.

 [13] Anil K. Jain. Data clustering: 50 years beyond k-means, 2008.
 [14] Anil K Jain and Richard C Dubes. Algorithms for clustering data. PrenticeHall, Inc., 1988.
 [15] Daniel D Lee and H Sebastian Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401(6755):788–791, 1999.
 [16] C. Leslie, E. Eskin, A. Cohen, J. Weston, and W.S. Noble. Mismatch string kernels for discriminative protein classification. Bioinformatics, 1(1):1–10, 2003.
 [17] S.N. MacEachern. Estimating normal means with a conjugatestyle Dirichlet process prior. Communication in Statistics: Simulation and Computation, 23:727–741, 1994.
 [18] P. McCullagh. Marginal likelihood for distance matrices. Statistica Sinica, 19:631–649, 2009.
 [19] P. McCullagh and J. Yang. How many clusters? Bayesian Analysis, 3:101–120, 2008.
 [20] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.

 [21] Andrew Y Ng, Michael I Jordan, Yair Weiss, et al. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
 [22] J. Pitman. Combinatorial stochastic processes. In J. Picard, editor, Ecole d'Ete de Probabilites de Saint-Flour XXXII-2002. Springer, 2006.
 [23] Gunnar Rätsch and Sören Sonnenburg. Accurate splice site prediction for Caenorhabditis elegans. In Kernel Methods in Computational Biology, MIT Press Series on Computational Molecular Biology, pages 277–298. MIT Press, 2004.
 [24] C.P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2005.
 [25] H Saigo, JP Vert, N Ueda, and T Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20(11):1682–1689, 2004.
 [26] Sören Sonnenburg, Gunnar Rätsch, and Konrad Rieck. Large scale learning with string kernels. In Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors, Large Scale Kernel Machines, pages 73–103. MIT Press, Cambridge, MA., 2007.

 [27] M.S. Srivastava. Singular Wishart and multivariate beta distributions. Annals of Statistics, 31(2):1537–1560, 2003.
 [28] Michael Steinbach, George Karypis, Vipin Kumar, et al. A comparison of document clustering techniques. In KDD Workshop on Text Mining, volume 400, pages 525–526. Boston, 2000.
 [29] Y. W. Teh, C. Blundell, and L. T. Elliott. Modelling genetic variations with fragmentationcoagulation processes. In Advances In Neural Information Processing Systems, 2011.
 [30] S. V. N. Vishwanathan, Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. Graph kernels. J. Mach. Learn. Res., 11:1201–1242, August 2010.
 [31] Julia E. Vogt. Unsupervised structure detection in biomedical data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2015.
 [32] Julia E. Vogt, Sandhya Prabhakaran, Thomas J. Fuchs, and Volker Roth. The translationinvariant WishartDirichlet process for clustering distance data. In ICML, pages 1111–1118, 2010.
 [33] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Time-sensitive Dirichlet process mixture models. Technical report, Carnegie Mellon University, 2005.