
Probabilistic Clustering of Time-Evolving Distance Data

We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points. The proposed method uses the information from adjacent time points to find the underlying cluster structure and to obtain a smooth cluster evolution. This approach allows the number of objects and clusters to differ at every time point, and no information about the identities of the objects is needed. Further, the number of clusters does not have to be specified in advance -- it is instead determined automatically using a Dirichlet process prior. We validate our model on synthetic data, showing that the proposed method is more accurate than state-of-the-art clustering methods. Finally, we use our dynamic clustering model to analyze and illustrate the evolution of brain cancer patients over time.


1 Introduction

A major challenge in data analysis is to find simple representations of the data that best reveal the underlying structure of the investigated phenomenon [15]. Clustering is a powerful tool to detect such structure in empirical data, thus making it accessible to practitioners [14]. The problem of clustering has a very long history in the data mining and machine learning communities, and numerous clustering algorithms and applications have been studied in many different scientific disciplines over the past 50 years [13]. Applications of clustering cover a large variety of problem domains, for example clustering text, social networks, images, or biomedical data [4, 10, 21, 28]. Traditional clustering methods, such as k-means or Gaussian mixture models [12], rely on a geometric representation of the data. Nowadays, however, it is increasingly common that no underlying vectorial representation of the data is available, because only pairwise similarities or distances are measured. An example application domain where such a setting frequently occurs is biomedical data analysis, where more often than not only pairwise distance data is available, e.g., when DNA or protein sequences are represented via pairwise distances or string alignments [9, 16, 23, 25, 26].
Although many clustering methods exist that work on distance data, including single linkage clustering, complete linkage clustering, and Ward's clustering [14], these are static methods that are blind to a potentially underlying time structure. However, when data is obtained at different points in time, dynamic models are needed that take this time component into account. In cancer research, for example, genes are frequently measured at different time points in order to examine the efficiency of a medication over time. In network security, HTTP connections are recorded at various timestamps, since network behaviors can change quickly over time; in computer vision, video streams contain time-indexed sequences of images. To deal with such scenarios, dynamic models are needed that take the evolving nature of the data into account. This requirement has been addressed with evolutionary or dynamic clustering models for vectorial data (as for instance in [2], [7], [29], or [33]), which obtain a smooth clustering over multiple time points. However, to the best of our knowledge, no time-evolving clustering models exist that work on distance data directly, and clustering of time-evolving distance data is still an unsolved problem.
In this work we bridge this gap and present a novel Bayesian time-evolving clustering model that works directly on distance data, is specially tailored to temporal data, and does not require access to an underlying vector space. Our model detects cluster popularity over time, based on the rich-gets-richer phenomenon: we can make predictions about how popular a cluster will be at the next time point if we already know that it was a rich cluster at the current one. The assumption that rich clusters get richer seems plausible in many domains; for instance, a hot news topic is likely to stay hot for a given time period. Our model also copes with variability in data size: the number of data points may vary between time points, for instance because data items arrive or leave. Likewise, the number of clusters may vary over time, and the model adjusts its capacity accordingly and automatically. The aim is to find the underlying structure at every time point and to obtain a smooth cluster evolution, which results in an easily interpretable model. The information shared across neighboring time points is related to the sizes of the clusters, the time-varying behavior of the clusters is assumed to be Markovian, and Markov chain Monte Carlo (MCMC) sampling is used for inference.


The presented method is also applicable to the less general case of pairwise similarity data, by using a slightly altered likelihood. Since Mercer kernels can encode similarities between many different kinds of objects (for instance kernels on graphs, images, structures or strings), the method proposed here can in particular cover the entire scope of applications of kernel-based learning, be it string alignment kernels over DNA or protein sequences [16, 23, 26] or diffusion kernels on graphs [30].
We validate our approach by comparing it to baseline methods on simulated data, where our new model significantly outperforms state-of-the-art clustering approaches. We then apply our novel model to a highly topical and challenging real-world data set of brain cancer patients from Memorial Sloan Kettering Cancer Center (MSKCC). This data consists of clinical notes, taken from the electronic health records (EHR) of brain cancer patients, over 3 consecutive years. We model brain cancer patients over time, where patients are grouped together based on the similarity of sentences in their clinical notes (see Section 4.2).

2 Background

In this section we recap important background knowledge which is essential for the remainder of this paper.

Partition Process:

Let $\mathcal{B}_n$ denote the set of partitions of $[n]$, where $[n] := \{1, \dots, n\}$ denotes an index set. A partition $B \in \mathcal{B}_n$ is an equivalence relation $B : [n] \times [n] \to \{0, 1\}$ with $B_{ij} = 1$ if $y(i) = y(j)$ and $B_{ij} = 0$ otherwise; $y$ denotes a function that maps $[n]$ to some label set $\mathcal{L}$. Alternatively, $B$ may be represented as a set of disjoint non-empty subsets called "blocks". A partition process is a sequence of distributions $P_n$ on the sets $\mathcal{B}_n$ in which $P_{n-1}$ is the marginal distribution of $P_n$. This means that for each partition $B \in \mathcal{B}_n$, there exists a corresponding partition $B' \in \mathcal{B}_{n-1}$ which is obtained by deleting the last row and column from the matrix $B$. The properties of partition processes are discussed in detail in [19]. Such a process is called exchangeable if each $P_n$ is invariant under permutations of the object indices; see [22] for more details. An example of a partition lattice is shown in Fig. 1.

Figure 1: Example of a partition lattice.
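To make these equivalent representations concrete, the following toy snippet (a minimal Python illustration with hypothetical example labels) builds the block matrix $B$ from a label function $y$ and shows the marginalization obtained by deleting the last row and column.

```python
import numpy as np

# Toy illustration (hypothetical labels) of the equivalent partition representations:
# a label function y, the n x n block matrix B with B_ij = 1 iff y(i) = y(j),
# and the set of disjoint non-empty "blocks".
labels = np.array([0, 0, 1, 2, 1])                      # y : [n] -> label set
B = (labels[:, None] == labels[None, :]).astype(int)    # equivalence relation as a matrix

blocks = {int(l): np.flatnonzero(labels == l).tolist() for l in np.unique(labels)}
print(blocks)   # {0: [0, 1], 1: [2, 4], 2: [3]}

# Marginalization in a partition process: deleting the last row and column of B
# yields the induced partition of the first n-1 objects.
B_marginal = B[:-1, :-1]
```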
Gauss-Dirichlet Cluster Process:

The Gauss-Dirichlet cluster process consists of an infinite sequence of points in $\mathbb{R}^d$, together with a random partition of the integers into blocks. A sequence of length $n$ can be sampled as follows [17, 19]: fix the number of mixture modes $k$, generate mixing proportions $\pi = (\pi_1, \dots, \pi_k)$ from a symmetric Dirichlet distribution $\mathrm{Dir}(\xi/k, \dots, \xi/k)$, generate a label sequence $\{y(1), \dots, y(n)\}$ from a multinomial distribution with parameter $\pi$, and forget the labels, introducing the random partition $B$ of $[n]$ induced by $y$. Integrating out $\pi$, one arrives at a Dirichlet-Multinomial prior over partitions

(1) $\displaystyle P_n(B \mid \xi, k) \;=\; \frac{k!}{(k-k_B)!}\, \frac{\Gamma(\xi)}{\Gamma(n+\xi)} \prod_{b \in B} \frac{\Gamma(n_b + \xi/k)}{\Gamma(\xi/k)},$

where $k_B$ denotes the number of blocks present in the partition $B$ and $n_b$ is the size of block $b$. The limit as $k \to \infty$ is well defined and known as the Ewens process (a.k.a. the Chinese restaurant process (CRP)); see for instance [11, 20, 6]. Given such a partition $B$, a sequence of $d$-dimensional observations $x_1, \dots, x_n$ is arranged as the columns of the matrix $X \in \mathbb{R}^{d \times n}$, and this $X$ is generated from a zero-mean Gaussian distribution with covariance matrix

(2) $\mathrm{Cov}(x_i, x_j \mid B) \;=\; \delta_{ij}\,\Sigma_0 + B_{ij}\,\Sigma_1,$

where $\Sigma_0$ denotes the within-class covariance matrix and $\Sigma_1$ the between-class covariance matrix, respectively, and $\delta_{ij}$ denotes the Kronecker symbol. Since the partition process is invariant under permutations, we can always think of $B$ as being block-diagonal. For spherical covariance matrices (i.e. scaled identity matrices), $\Sigma_0 = \alpha I_d$ and $\Sigma_1 = \beta I_d$, the covariance structure reduces to $\mathrm{Cov}(x_i, x_j \mid B) = (\alpha\,\delta_{ij} + \beta\, B_{ij})\, I_d$. Thus, the columns of $X^{\!\top}$ contain $d$ independent $n$-dimensional vectors $\tilde{x}$ distributed according to a normal distribution with covariance matrix

(3) $\Sigma \;=\; \alpha I_n + \beta B.$

Further, the distribution factorizes over the blocks $b \in B$. Introducing the symbol $i_b$ for the index vector of all objects assigned to block $b$, the joint distribution reads

(4) $\displaystyle p(\tilde{x}_1, \dots, \tilde{x}_d \mid B) \;=\; \prod_{j=1}^{d} \prod_{b \in B} \mathcal{N}\!\big(\tilde{x}_{j,\, i_b}\,;\, 0,\; \alpha I_{n_b} + \beta\, \mathbf{1}_{n_b}\mathbf{1}_{n_b}^{\!\top}\big),$

where $n_b$ is the size of block $b$ and $\mathbf{1}_{n_b}$ is a $n_b$-vector of ones. From the viewpoint of clustering, $n$ denotes the number of objects we want to partition and $d$ the dimension of each object.
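The generative process above can be summarized in a short sketch. The snippet below is an illustration only: the names alpha, beta and xi stand in for the within-class variance, between-class variance and Dirichlet rate of the spherical model in (1)-(3), and all numerical values are arbitrary placeholders.

```python
import numpy as np

def sample_gauss_dirichlet(n=20, d=50, k=3, xi=1.0, alpha=1.0, beta=5.0, seed=0):
    """Sketch of the Gauss-Dirichlet cluster process with spherical covariances:
    draw mixing proportions from a symmetric Dirichlet, draw labels, forget them
    (keep only the induced partition B), then draw d i.i.d. n-dimensional rows
    of X from N(0, alpha*I_n + beta*B)."""
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.full(k, xi / k))                # mixing proportions
    labels = rng.choice(k, size=n, p=pi)                  # label sequence (later forgotten)
    B = (labels[:, None] == labels[None, :]).astype(float)
    Sigma = alpha * np.eye(n) + beta * B                  # Eq. (3), spherical case
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=d)   # d x n, rows i.i.d. N(0, Sigma)
    S = X.T @ X                                           # n x n dot-product matrix
    return labels, B, X, S

labels, B, X, S = sample_gauss_dirichlet()
```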

Wishart-Dirichlet Cluster Process:

Assume that the random matrix $X \in \mathbb{R}^{d \times n}$ follows the zero-mean Gaussian distribution specified in (2), with $\Sigma_0 = \alpha I_d$ and $\Sigma_1 = \beta I_d$. Then, conditioned on the partition $B$, the inner-product matrix $S = X^{\!\top} X$ follows a (possibly singular) Wishart distribution in $d$ degrees of freedom, $S \sim \mathcal{W}_d(\Sigma)$, as was shown in [27]. If we directly observe the dot products $S$, it suffices to consider the conditional probability of partitions

(5) $p(B \mid S) \;\propto\; p(S \mid B, \alpha, \beta)\; P_n(B \mid \xi, k).$
Information Loss:

Note that we assumed that there exists a matrix $X \in \mathbb{R}^{n \times d}$ with $S = XX^{\!\top}$ such that the columns of $X$ are independent copies drawn from a zero-mean Gaussian in $\mathbb{R}^n$: $\tilde{x} \sim \mathcal{N}(0_n, \Sigma)$. This assumption is crucial, since general mean vectors correspond to a non-central Wishart model [3], which can be calculated analytically only in special cases, and even these cases have a very complicated form that imposes severe problems for deriving efficient inference algorithms.
By moving from vectors $X$ to pairwise similarities $S$ and from similarities $S$ to pairwise distances $D$, information about geometric transformations is lost: assume we only observe $S = XX^{\!\top}$ without access to the vectorial representation $X$. Then we have lost the information about orthogonal transformations $X \mapsto XO$ with $OO^{\!\top} = I$, i.e. about rotations and reflections of the rows in $X$. If we only observe $D$, we have additionally lost the information about translations of the rows of $X$.
The models above imply that the mean of each row of $X$ is expected to converge to zero as the number of replications $d$ goes to infinity. Thus, if we had access to $X$ and were not sure that the above zero-mean assumption holds, it might be a plausible strategy to subtract the empirical row means and then to construct a candidate matrix $S$ by computing the pairwise dot products. This procedure should be statistically robust if the number of replications $d$ is large, since then the empirical means are probably close to their expected values. Such a corrected matrix $S$ fulfills two important requirements for selecting candidate dot-product matrices:
First, $S$ should be "typical" with respect to the assumed Wishart model, thereby avoiding any bias introduced by a particular choice. Second, the choice should be robust in a statistical sense: if we are given a second observation from the same underlying data source, the two selected prototypical matrices $S$ and $S'$ should be similar. For small $d$, this correction procedure is dangerous, since it can introduce a strong bias even if the model is correct: suppose we are given only two replications, i.e. $d = 2$. After subtracting the row means, all row vectors lie on the diagonal line in $\mathbb{R}^2$, and the cluster structure is heavily distorted.
Consider now the case where we observe $D$ without access to $S$. To "correct" the matrix $D$ just as described above, we would need a procedure which effectively subtracts the empirical row means of $X$ from the rows of $X$.
Unfortunately, there exists no such matrix transformation that operates directly on $D$ without explicit construction of $X$. It is important to note that the "usual" centering transformation $S \mapsto QSQ$ with $Q = I_n - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\!\top}$, as used in kernel PCA and related algorithms, does not work here: in kernel PCA the rows of $X$ are assumed to be i.i.d. replications in $\mathbb{R}^d$. Consequently, the centered matrix is built by subtracting the column means of $X$. Here, we need to subtract the row means, and therefore it is inevitable to explicitly construct $X$, which implies that we have to choose a certain orthogonal transformation $O$. It might be reasonable to consider only rotations and to use the principal components as coordinate axes. This is essentially the kernel PCA embedding procedure: compute $S_c = QSQ$ and its eigenvalue decomposition $S_c = V \Lambda V^{\!\top}$, and then project on the principal axes, $X = V \Lambda^{1/2}$. The problem with this vector-space embedding is that it is statistically robust in the above sense only if $d$ is small, because otherwise the directions of the principal axes might be difficult to estimate, and the estimates for two replicated observations might fluctuate strongly, leading to different column-mean normalizations. Note that this condition for fixing the rotation contradicts the above condition that justifies the subtraction of the means. Further, column-mean normalization will change the pairwise dissimilarities (even if the model is correct!), and this change can be drastic if $d$ is small.
The cleanest solution might be to consider the distances $D$ (which are either obtained directly as input data, or can be computed as $D_{ij} = S_{ii} + S_{jj} - 2S_{ij}$) and to avoid an explicit choice of $S$ and $X$ altogether. Therefore, one encodes the translation invariance directly into the likelihood, which means that the latter becomes constant on all matrices $S$ that lead to the same $D$. The information loss that occurs by moving from vectors to pairwise similarities and from similarities to pairwise distances is depicted in Fig. 2.

Figure 2: Information loss that occurs by moving from vectors $X$ to pairwise distances $D$. By moving from $X$ to pairwise similarities $S$, information about rotation of the vectors is lost; by moving from $S$ to $D$, information about translation is lost. One can reconstruct a whole equivalence class of matrices $S$ (four examples are bordered in red) from one distance matrix $D$, i.e. the reconstruction of a similarity matrix $S$ from $D$ is not unique, as there is a surjective mapping from a set of matrices $S$ to $D$.
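For reference, the two standard conversions used throughout this section can be written in a few lines of Python; the function names are ours, not the paper's. The first maps a dot-product matrix $S$ to squared distances $D$ as discussed above; the second maps $D$ to one translation-invariant representative $W = -\tfrac{1}{2} Q D Q$, which is the same for every $S$ in the equivalence class of Fig. 2.

```python
import numpy as np

def distances_from_dot_products(S):
    """D_ij = S_ii + S_jj - 2 S_ij (squared Euclidean distances)."""
    diag = np.diag(S)
    return diag[:, None] + diag[None, :] - 2.0 * S

def centered_candidate(D):
    """One translation-invariant representative W = -1/2 * Q D Q with
    Q = I - (1/n) 11^T; every S leading to the same D maps to the same W."""
    n = D.shape[0]
    Q = np.eye(n) - np.full((n, n), 1.0 / n)
    return -0.5 * Q @ D @ Q
```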
Translation-invariant Wishart-Dirichlet Cluster Process:

A method that works directly on distances has been discussed in [1, 32] as an extension of the Wishart-Dirichlet cluster process. These methods cluster static distance data, and no access to vectorial data is required. The model presented in [32] tackles the problem where we do not directly observe $S$, but only a matrix $D$ of pairwise Euclidean distances. In the following, the assumption is that the (suitably pre-processed) matrix $D$ contains squared Euclidean distances with components

(6) $D_{ij} \;=\; S_{ii} + S_{jj} - 2 S_{ij}.$

A squared Euclidean distance matrix $D$ is characterized by the property of being of negative type, which means that $x^{\!\top} D\, x \le 0$ for any $x$ with $x^{\!\top}\mathbf{1} = 0$. This condition is equivalent to the absence of negative eigenvalues in $S_c = -\tfrac{1}{2} Q D Q$. The distribution of $D$ has been formally studied in [18], Eq. (3.2), where it was shown that if $S$ follows a standard Wishart generated from an underlying zero-mean Gaussian process, $S \sim \mathcal{W}_d(\Sigma)$, then $D$ follows a generalized Wishart distribution defined with respect to the transformation kernel $K = \mathbf{1}$. To understand the role of the transformation kernel, it is useful to introduce the notion of a generalized Gaussian distribution with kernel $K$, written $\tilde{x} \sim \mathcal{N}(K;\, \mu, \Sigma)$: for any transformation $L$ with $LK = 0$, the meaning of this notation is $L\tilde{x} \sim \mathcal{N}(L\mu,\, L\Sigma L^{\!\top})$. It follows that under the kernel $K = \mathbf{1}$, two parameter settings $(\mu_1, \Sigma_1)$ and $(\mu_2, \Sigma_2)$ are equivalent if $L\mu_1 = L\mu_2$ and $L\Sigma_1 L^{\!\top} = L\Sigma_2 L^{\!\top}$ for all $L$ with $L\mathbf{1} = 0$, i.e. if $\mu_1 - \mu_2 \in \mathrm{span}\{\mathbf{1}\}$ and $\Sigma_1 - \Sigma_2 \in \{\mathbf{1}v^{\!\top} + v\mathbf{1}^{\!\top} : v \in \mathbb{R}^n\}$. It is also useful to introduce the distributional symbol $S \sim \mathcal{W}(K;\, d, \Sigma)$ for the generalized Wishart distribution of the random matrix $S = XX^{\!\top}$ when $X \sim \mathcal{N}(K;\, 0, \Sigma)$. The key observation in [18] is that the map $S \mapsto D$ with $D_{ij} = S_{ii} + S_{jj} - 2S_{ij}$ defines a linear transformation on symmetric matrices whose kernel is exactly the set of matrices absorbed by $K = \mathbf{1}$, which implies that the distances follow a generalized Wishart distribution with kernel $\mathbf{1}$:

(7) $D \;\sim\; \mathcal{W}(\mathbf{1};\, d,\, -2\Sigma).$

In the multi-dimensional case with spherical within- and between-class covariances, we generalize the above model to Gaussian random matrices $X \in \mathbb{R}^{n \times d}$ whose $d$ columns are i.i.d. copies drawn from $\mathcal{N}(0, \Sigma)$ with $\Sigma = \alpha I_n + \beta B$. The matrix of squared Euclidean distances then follows a generalized Wishart distribution with $d$ degrees of freedom, $D \sim \mathcal{W}(\mathbf{1};\, d,\, -2\Sigma)$. This distribution differs from a standard Wishart in that the inverse matrix $\Sigma^{-1}$ is substituted by the matrix

$\widetilde{W} \;=\; \Sigma^{-1} - \dfrac{\Sigma^{-1}\mathbf{1}\mathbf{1}^{\!\top}\Sigma^{-1}}{\mathbf{1}^{\!\top}\Sigma^{-1}\mathbf{1}}$

and the determinant is substituted by a generalized $\overline{\det}$-symbol which denotes the product of the nonzero eigenvalues of its matrix-valued argument (note that $\widetilde{W}$ is rank-deficient). The conditional probability of a partition then reads

(8) $p(B \mid D, \alpha, \beta, \xi) \;\propto\; p(D \mid B, \alpha, \beta)\; P_n(B \mid \xi, k),$

and the probability density function (which serves as likelihood function in the model) is then defined as

(9) $p(D \mid B, \alpha, \beta) \;\propto\; \overline{\det}(\widetilde{W})^{d/2} \exp\!\big(\tfrac{1}{4}\,\mathrm{tr}(\widetilde{W} D)\big).$

Note that in spite of the fact that this probability is written as a function of $D$, it is constant over all choices of $S$ (equivalently, $X$) that lead to the same $D$, i.e. it is invariant under translations of the row vectors in $X$. For the purpose of inferring the partition $B$, this invariance property means that one can simply use a block-partition covariance model $\Sigma = \alpha I_n + \beta B$ and assume that the (unobserved) matrix $S$ follows a standard Wishart distribution parametrized by $\Sigma$. We do not need to care about the exact form of $S$, since the conditional posterior for $B$ depends only on $D$. An extensive analysis of the influence of encoding the translation invariance into the likelihood, versus the standard WD process with row-mean subtraction, was conducted in [32].
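A minimal sketch of evaluating this translation-invariant likelihood for a candidate partition is given below. It uses the (possibly singular) Wishart form with $W = -\tfrac{1}{2}QDQ$ and the projected covariance $Q\Sigma Q$, which agrees with (9) up to additive terms that do not depend on the partition; the constants and scaling conventions of the paper's Eq. (9) may differ, so this is an illustration rather than a reference implementation.

```python
import numpy as np

def log_lik_partition(D, labels, alpha, beta, d):
    """Translation-invariant log-likelihood of a partition given squared
    distances D, up to additive terms that do not depend on the partition.
    Uses the singular-Wishart form: W = -1/2 Q D Q ~ Wishart(d, Q Sigma Q)
    with Sigma = alpha*I + beta*B; scaling conventions may differ from Eq. (9)."""
    n = D.shape[0]
    B = (labels[:, None] == labels[None, :]).astype(float)
    Sigma = alpha * np.eye(n) + beta * B
    Q = np.eye(n) - np.full((n, n), 1.0 / n)
    W = -0.5 * Q @ D @ Q                       # translation-invariant representative
    St = Q @ Sigma @ Q                         # projected covariance, rank n-1
    eig = np.linalg.eigvalsh(St)
    nz = eig[eig > 1e-10 * eig.max()]
    log_pdet = np.sum(np.log(nz))              # generalized determinant
    trace_term = np.trace(np.linalg.pinv(St) @ W)
    return -0.5 * d * log_pdet - 0.5 * trace_term
```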

3 A Time-evolving Translation-invariant Wishart-Dirichlet Process

In this section, we present a novel dynamic clustering approach, the time-evolving translation-invariant Wishart-Dirichlet process (Te-TiWD), for clustering distance data that is available at multiple time points. In this model, we assume that pairwise distance data $D^t$, $t = 1, \dots, T$, is available over $T$ time points. At every time point all objects are fully exchangeable, and the number of data points may differ between time points. The model clusters data points over multiple time points, allowing group memberships and the number of clusters to evolve over time through the addition, deletion or modification of clusters. It builds on the static clustering model proposed in [32], which cannot account for a time structure. Note that our model completely ignores any information about the identities of the data points across time points, which makes it possible to cluster different objects over time. Table 1 summarizes the notation used in the following sections.

$D^t$ — distance matrix at time point $t$ (cf. (6))
$\widetilde{W}^t$ — matrix $\widetilde{W}$ at time point $t$ (cf. (7))
$B^t$ — partition matrix at time point $t$
$k_{B^t}$ — number of blocks present in the partition $B^t$
$n_{tb}$ — size of block $b$ at time point $t$
$n_{tb}^{-i}$ — size of block $b$ at time point $t$ without object $i$
$n_t$ — number of data points present at the $t$-th time point
$\Sigma_\beta^t$ — $k_{B^t} \times k_{B^t}$ matrix whose entries are the between-class variances of pairs of blocks
$B^{(1:T)}$ — defined as $(B^1, \dots, B^T)$
$\Sigma_\beta^{(1:T)}$ — defined as $(\Sigma_\beta^1, \dots, \Sigma_\beta^T)$
$B^1, \dots, B^T$ — defines a first-order Markov chain
$\Sigma_\beta^1, \dots, \Sigma_\beta^T$ — defines a first-order Markov chain
$B^{-t}$, $\Sigma_\beta^{-t}$ — matrices at all time points except at time point $t$
Table 1: Notation used throughout this manuscript.

3.1 The Model

The aim of the proposed method is to cluster distance data at multiple time points $t = 1, \dots, T$. For every time point under consideration, $t$, we observe a distance matrix $D^t$, and we want to infer the partition matrix $B^t$ by utilizing the partitions from adjacent time points. By using information from adjacent time points, we expect better clustering results than by clustering every time point independently. At every time point, the number of data points may differ, and clusters may die out or newly evolve over time. The assumptions on the data are the following:

Assumption 1

Given a partition $B^t$, the assumed underlying $d$-dimensional vectorial observations $x_1^t, \dots, x_{n_t}^t$ are arranged as the columns of the matrix $X^t \in \mathbb{R}^{d \times n_t}$, and $X^t$ is generated from a zero-mean Gaussian distribution with covariance matrix

(10) $\Sigma^t \;=\; \alpha I_{n_t} + \Sigma_B^t.$

Covariance matrix $\Sigma_B^t$.

In the static clustering method, the underlying vectorial data was assumed to be distributed according to a zero-mean Gaussian with covariance $\Sigma = \alpha I_n + \beta B$ (cf. (3)), where $\beta$ describes the between-class variance. As $\beta$ is a scalar, all clusters in the static clustering are equidistant (as demonstrated on the left of Fig. 3). To model time-evolving data, we need a more flexible between-class covariance matrix $\Sigma_\beta^t$ which allows the cluster centroids to have different distances to each other. These full matrices $\Sigma_\beta^t$ are necessary for a time-evolving clustering model, as the clusters are coupled over the different time points through the geometric information of the clusters, and this coupling can only be captured by modeling a richer covariance. The between-class covariance $\Sigma_B^t$ is obtained in the following way:

(11) $\Sigma_B^t \;=\; A^t\, \Sigma_\beta^t\, (A^t)^{\!\top},$

with $A^t \in \{0,1\}^{n_t \times k_{B^t}}$. The matrix $A^t$ associates an object with one out of $k_{B^t}$ clusters. As every object can only belong to exactly one cluster, $A^t$ has a single element of 1 per row. In Fig. 3 we show examples of $\beta B$ and $\Sigma_B^t$, as well as the corresponding cluster arrangements which the matrices imply.

Figure 3: Different models for clustering. Left: example of the block-diagonal structure of the static between-class covariance $\beta B$ for three blocks; all cluster centroids must be equidistant. Right: example of the full covariance matrix $\Sigma_B$ (for better readability, we drop the time index in the figure), which allows differing distances between cluster centroids.
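A small sketch of Eq. (11): given a label vector and a $k \times k$ matrix $\Sigma_\beta$, the assignment matrix $A$ expands it to the $n \times n$ between-class covariance. The example values below are placeholders.

```python
import numpy as np

def between_class_covariance(labels, Sigma_beta):
    """Sketch of Eq. (11): expand the k x k between-class covariance to the
    n x n matrix Sigma_B = A Sigma_beta A^T, where A has a single 1 per row
    (object i belongs to cluster labels[i])."""
    n, k = len(labels), Sigma_beta.shape[0]
    A = np.zeros((n, k))
    A[np.arange(n), labels] = 1.0
    return A @ Sigma_beta @ A.T

# Example: three clusters with a full (non-diagonal) between-class covariance.
Sigma_beta = np.array([[4.0, 1.0, 0.5],
                       [1.0, 3.0, 0.2],
                       [0.5, 0.2, 5.0]])
labels = np.array([0, 0, 1, 2, 2, 1])
Sigma_B = between_class_covariance(labels, Sigma_beta)
```

Choosing a scaled identity for Sigma_beta recovers the static case beta*B, which is the content of Lemma 1 below.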

Note that $\Sigma_B^t$ is a more general version of $\beta B$:

Lemma 1

$\Sigma_B^t = A^t \Sigma_\beta^t (A^t)^{\!\top} = \beta B^t$      iff      $\Sigma_\beta^t = \beta I_{k_{B^t}}$.

Prior over the block matrices $B^t$.

The prior over the block matrices $B^t$ is defined in the following way. The prior for $B^t$ within one epoch is the Dirichlet-Multinomial prior over partitions as in (1). Using the definition of the conditional prior over clusters defined in [2], we extend this idea to a conditional prior over partitions. In a generative sense, the same idea is used to generate a labeled set of partitions, and then the labels are forgotten to obtain a distribution over partitions. By $n_{(t-1)b}$ we denote the size of block $b$ at time point $t-1$ if the corresponding block is present at time point $t$ as well. We consider the following generative process for a finite dynamic mixture model with $k$ mixtures (cf. [2], Eqs. (4.5), (4.6) and (5.9)): for each time point $t$, we generate mixing proportions $\pi^t$ from a symmetric Dirichlet distribution $\mathrm{Dir}(\xi/k, \dots, \xi/k)$. As in the static case, we generate a label sequence from a multinomial distribution and forget the labels, introducing the random partition $B^t$. Integrating out $\pi^t$, the conditional distribution of the Dirichlet-Multinomial prior over partitions, given the partition at the previous time point $B^{t-1}$, can be written as:

(12)

Note that (12) defines a partition process as described in Section 2, with $P_{n_{t-1}}$ being the marginal distribution of $P_{n_t}$, and it is also an exchangeable process, as each $P_{n_t}$ is invariant under permutations of the object indices.
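The conditional construction can be sketched generatively as below. This is a simplified stand-in in the spirit of the recurrent Chinese restaurant process of [2]: we assume, purely for illustration, that the Dirichlet weights at time $t$ are boosted by the block sizes carried over from time $t-1$, so that rich clusters tend to stay rich; it is not the paper's exact Eq. (12).

```python
import numpy as np

def sample_labels_next_epoch(prev_counts, n_t, k, xi, rng):
    """Hypothetical sketch of the conditional construction, in the spirit of the
    recurrent Chinese restaurant process of [2]: cluster weights at time t are
    drawn from a Dirichlet whose parameters add the symmetric base measure xi/k
    to the block sizes carried over from time t-1 (prev_counts, length k, with
    zeros for clusters that were empty).  Not the paper's exact Eq. (12)."""
    pi_t = rng.dirichlet(xi / k + prev_counts)      # rich clusters tend to stay rich
    labels_t = rng.choice(k, size=n_t, p=pi_t)      # labels are forgotten afterwards
    return labels_t

rng = np.random.default_rng(1)
prev_counts = np.array([12.0, 6.0, 2.0, 0.0, 0.0])  # block sizes at t-1 (k = 5 mixtures)
labels_t = sample_labels_next_epoch(prev_counts, n_t=25, k=5, xi=1.0, rng=rng)
```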

Prior over $\Sigma_\beta^t$.

The prior over the matrices $\Sigma_\beta^t$ is given by a Wishart distribution with $v$ degrees of freedom, centered at the matrix of the previous time point $\Sigma_\beta^{t-1}$. The degrees of freedom $v$ influence the behavior of the Wishart distribution: a low value of $v$ allows drastic changes in the clustering structure, a high value of $v$ allows fewer changes. We also have to consider that the sizes of $\Sigma_\beta^{t-1}$ and $\Sigma_\beta^{t}$ might differ, as the number of clusters may be different in every epoch. Therefore, we consider the following two cases:

  • If there are more blocks at time $t-1$ than at time $t$, i.e. $k_{B^{t-1}} > k_{B^t}$:
    delete the corresponding rows and columns in $\Sigma_\beta^{t-1}$. With $\breve{\Sigma}_\beta^{t-1}$ we denote the "reduced" matrix. Then $\Sigma_\beta^t$ is drawn from a Wishart distribution with scale matrix $\breve{\Sigma}_\beta^{t-1}$.

  • If there are fewer blocks at time $t-1$ than at time $t$, i.e. $k_{B^{t-1}} < k_{B^t}$:
    first, draw a $k_{B^{t-1}} \times k_{B^{t-1}}$ matrix from the Wishart distribution with scale matrix $\Sigma_\beta^{t-1}$. Second, augment as many new rows and columns as needed to obtain the full positive definite matrix $\Sigma_\beta^t$. We can draw the additional rows and columns of $\Sigma_\beta^t$ in the following way (see [5] for details):

    (13)

    One obtains the required blocks as follows:

    (14)

    where $v$ denotes the degrees of freedom of the Wishart distribution.

A graphical depiction of the generative model of Te-TiWD is given in Fig. 4.


Figure 4: Depiction of the generative model of Te-TiWD with all hyper-parameters and parameters. Shaded circles correspond to fixed or observed variables, unshaded circles to latent variables. Arrows that point to a box mean that the parameters apply to all the variables inside the box, whereas arrows that point directly to a variable apply only to that single variable. The distance matrices observed at the different time points are denoted by $D^t$, the inferred partitions by $B^t$, and the between-class covariance matrices by $\Sigma_\beta^t$.
Posterior over $B^{(1:T)}$ and $\Sigma_\beta^{(1:T)}$.

With the likelihood $p(D^t \mid B^t, \Sigma_\beta^t)$ for every time point, analogous to Eq. (9), and the priors over $B^t$ and $\Sigma_\beta^t$, we can now write down the posterior over $B^{(1:T)}$ and $\Sigma_\beta^{(1:T)}$ for all time points $t = 1, \dots, T$:

(15) $\displaystyle p\big(B^{(1:T)} \mid D^{(1:T)}, \Sigma_\beta^{(1:T)}\big) \;\propto\; \prod_{t=1}^{T} p\big(D^t \mid B^t, \Sigma_\beta^t\big)\; P\big(B^t \mid B^{t-1}\big)$

(16) $\displaystyle p\big(\Sigma_\beta^{(1:T)} \mid D^{(1:T)}, B^{(1:T)}\big) \;\propto\; \prod_{t=1}^{T} p\big(D^t \mid B^t, \Sigma_\beta^t\big)\; p\big(\Sigma_\beta^t \mid \Sigma_\beta^{t-1}\big)$

with the likelihood $p(D^t \mid B^t, \Sigma_\beta^t)$ as in Eq. (9) and the conditional priors as in Eq. (12) and the Wishart prior above (cf. (8) and (9)).

3.1.1 MCMC sampling for posterior inference.

To apply MCMC sampling, we look at the conditional distributions at each time point $t$:

(17) $p\big(B^t, \Sigma_\beta^t \mid D^t, B^{-t}, \Sigma_\beta^{-t}\big) \;\propto\; p\big(D^t \mid B^t, \Sigma_\beta^t\big)\; P\big(B^t \mid B^{t-1}\big)\; P\big(B^{t+1} \mid B^{t}\big)\; p\big(\Sigma_\beta^t \mid \Sigma_\beta^{t-1}\big)\; p\big(\Sigma_\beta^{t+1} \mid \Sigma_\beta^{t}\big).$
Posterior sampling for $B^t$.

The posterior sampling involves sampling the cluster assignments. As we are dealing with non-conjugate priors in (17), we use a Gibbs sampling algorithm with auxiliary variables, as presented in [20]. We consider the infinite model with $k \to \infty$. The aim is to assign an object $i$ in epoch $t$ to either an existing cluster $b$, a new cluster that exists at epoch $t-1$ or at epoch $t+1$, or a completely new cluster. The prior probability that object $i$ belongs to an existing cluster $b$ at time point $t$ is

(18)

There exist four different prior probabilities for an object belonging to a new cluster at time point $t$, which are summarized in Table 2.


Cluster $b$ exists at:

  • both adjacent time points $t-1$ and $t+1$: (19)

  • time point $t-1$ but not at time point $t+1$: (20)

  • time point $t+1$ but not at time point $t-1$: (21)

  • neither $t-1$ nor $t+1$: (22)
    (i.e. object $i$ is assigned to a completely new cluster)

Table 2: Prior probabilities for assigning an object to a new cluster.
Metropolis-Hastings update steps.

At every time point, we need to sample the values of the between-class covariance matrix $\Sigma_\beta^t$. To find the values within one epoch, we sample a whole "new" matrix $\Sigma_\beta^{t*}$ with a Metropolis-Hastings algorithm (see [24]), where $\Sigma_\beta^{t}$ denotes the current matrix. As proposal distribution we choose a Wishart distribution, and for the prior we choose a Wishart distribution as well, leading to the standard Metropolis-Hastings acceptance ratio between the proposed and the current matrix.
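One such Metropolis-Hastings update could look as follows. This is a sketch under stated assumptions: the Wishart proposal is centred at the current matrix, the Wishart prior uses the matrix carried over from the previous epoch, the degrees of freedom are placeholder values, and log_likelihood stands for the Te-TiWD likelihood at the current time point (cf. Eq. (9)); the paper's exact parametrization is not reproduced here.

```python
import numpy as np
from scipy.stats import wishart

def mh_step_sigma_beta(Sigma_curr, log_likelihood, Sigma_prev, v_prior=10, v_prop=50):
    """One Metropolis-Hastings update of the k x k between-class covariance.
    Proposal: Wishart centred at the current matrix (scale divided by the
    proposal degrees of freedom so the proposal mean equals Sigma_curr);
    prior: Wishart built from the previous epoch's matrix.  v_prior and
    v_prop are placeholders and must be at least the matrix dimension."""
    Sigma_prop = wishart.rvs(df=v_prop, scale=Sigma_curr / v_prop)
    log_accept = (
        log_likelihood(Sigma_prop) - log_likelihood(Sigma_curr)
        # Wishart prior, centred on the matrix carried over from epoch t-1.
        + wishart.logpdf(Sigma_prop, df=v_prior, scale=Sigma_prev / v_prior)
        - wishart.logpdf(Sigma_curr, df=v_prior, scale=Sigma_prev / v_prior)
        # Asymmetric proposal correction q(curr | prop) / q(prop | curr).
        + wishart.logpdf(Sigma_curr, df=v_prop, scale=Sigma_prop / v_prop)
        - wishart.logpdf(Sigma_prop, df=v_prop, scale=Sigma_curr / v_prop)
    )
    if np.log(np.random.uniform()) < log_accept:
        return Sigma_prop
    return Sigma_curr
```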

Hyperparameters and Initialization.

Our model includes the following hyperparameters: the scale parameter $\alpha$, the number of clusters $k$, the Dirichlet rate $\xi$, the degrees of freedom $d$, and a scale parameter for the Wishart prior. The model is not sensitive to the choice of the latter scale parameter, and we fix it to 1. $\alpha$ is sampled from a Gamma distribution with fixed shape and scale parameters. For the number of clusters, our framework is applicable to two scenarios: we can either assume $k \to \infty$, which results in the CRP model, or we fix $k$ to a large constant, which can be viewed as a truncated Ewens process. As the model does not suffer from the label-switching problem, initialization is not a crucial issue. We initialize with a single block, i.e. we start with one cluster containing all objects. The Dirichlet rate $\xi$ only weakly influences the likelihood, and the variance of its estimate only decays with $1/\log(n)$ (see [11]). In practice, we should not expect to reliably estimate $\xi$. Rather, we should have some intuition about $\xi$, maybe guided by the observation that under the Ewens process model the probability of two objects belonging to the same cluster is $1/(1+\xi)$. We can then either define an appropriate prior distribution on $\xi$, or fix it. Due to the weak effect of $\xi$ on the conditionals, these approaches usually behave very similarly. The degrees of freedom $d$ can be estimated by the rank of the similarity matrix if it is known from a pre-processing procedure. As $d$ is not a very critical parameter (all likelihood contributions are basically raised to the power of $d$), it may also be used as an annealing-type parameter for freezing a representative partition in the limit $d \to \infty$.

Pseudocode.

A pseudocode of the sampling algorithm is given in Algorithm 1.

  for sweep = 1 to #iterations do
     for t = 1 to T do
        for i = 1 to $n_t$ do
           Assign object $i$ to an existing cluster or a new one using Eqs. (17)-(22) and update $B^t$
        end for
     end for
     for t = 1 to T do
        Sample a new matrix $\Sigma_\beta^t$ using Metropolis-Hastings
     end for
  end for
Algorithm 1 Pseudocode Te-TiWD
Complexity.

We define one sweep of the Gibbs sampler as one complete update of $(B^{(1:T)}, \Sigma_\beta^{(1:T)})$. The most time-consuming part of a sweep is the update of $B^t$ by re-estimating the assignment to blocks for a single object (characterized by a row/column in $D^t$), given the partition of the remaining objects. For this, we have to compute the membership probabilities in all existing blocks (and in a new block). Every time a new partition is analyzed, a naive implementation requires $O(n_t^3)$ operations for computing the generalized determinant and the trace term in Eq. (9). In one sweep, such probabilities need to be computed for all $n_t$ objects and all blocks, summing up to costs on the order of $O(n_t^4)$ per time point. This suggests that scalability to large datasets can pose a problem. In this regard, we plan to address run time in future work by investigating the potential of variational methods, parallelizing the MCMC sampler, and updating parameters associated with multiple time points simultaneously.

Identifiability of clusters.

In some applications, it is of interest to identify and track clusters over time. For example, when grouping newspaper articles into topics it might be interesting to know which topics are present over a long time period, when a new topic becomes popular, and when a formerly popular topic dies out. Due to the translation invariance of our longitudinal model, we additionally need a cluster mean to be able to track clusters over the time course. To estimate the means of the clusters, we propose to embed the "overall" distance matrix $D^{\mathrm{all}}$, which contains the pairwise distances between all objects over all time points, into a vector space using kernel PCA. We first construct a positive semi-definite matrix $S^{\mathrm{all}}$ which fulfills $D^{\mathrm{all}}_{ij} = S^{\mathrm{all}}_{ii} + S^{\mathrm{all}}_{jj} - 2 S^{\mathrm{all}}_{ij}$. To correct $S^{\mathrm{all}}$, we compute the centered matrix $S_c = Q S^{\mathrm{all}} Q$ with $Q = I - \tfrac{1}{N}\mathbf{1}\mathbf{1}^{\!\top}$, where $N$ is the total number of objects over all time points. As a next step, we compute the eigenvalue decomposition of $S_c$, i.e. $S_c = V \Lambda V^{\!\top}$, and then project on the principal axes, $X = V \Lambda^{1/2}$, i.e. we use the principal components as coordinate axes. By embedding the distances into a vector space, the underlying block structure might be distorted (see Fig. 2). As our aim is to find the underlying block structure, it is hence not advisable to embed the data for clustering. For tracking the clusters, however, we just need the mean of an already inferred block structure, i.e. we embed the data not for grouping data points, but for finding the means of an already assigned partition, which allows us to track the clusters over time. We embed all objects together and choose the same orthogonal transformation for all objects, which enables identifiability of cluster means over the time course. This preprocessing step is only necessary if one is interested in the identifiability of clusters, and, since it is computationally expensive, it is computed only once, outside the sampling routine; computing it within the sampling routine would slow down our sampler significantly.
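The embedding used only for tracking can be sketched as follows (classical MDS / kernel PCA on the pooled distance matrix). D_all and the label vector are assumed inputs, and the function names are ours.

```python
import numpy as np

def embed_for_tracking(D_all):
    """Embed all objects (pooled over time points) once, so that cluster means
    are comparable across epochs: classical MDS / kernel PCA on the overall
    squared-distance matrix D_all."""
    N = D_all.shape[0]
    Q = np.eye(N) - np.full((N, N), 1.0 / N)
    S_c = -0.5 * Q @ D_all @ Q                      # centered dot-product matrix
    eigval, eigvec = np.linalg.eigh(S_c)
    keep = eigval > 1e-10 * eigval.max()            # drop numerically zero/negative part
    return eigvec[:, keep] * np.sqrt(eigval[keep])  # coordinates on the principal axes

def cluster_means(coords, labels):
    """Mean coordinate of each inferred block; used only to track clusters."""
    return {int(b): coords[labels == b].mean(axis=0) for b in np.unique(labels)}
```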

4 Experiments

4.1 Synthetic Experiments

4.1.1 Well separated clusters.

In a first experiment, we test our method on simulated data. We simulate data in two ways: first, we generate data points according to the model assumptions, and second, we generate data independently of the model assumptions. We start with a small experiment where we consider five time points, each with 20 data points per time point in 100 dimensions, i.e. we consider a small data set size and a large dimension.

Data generation.

The data is generated (according to the model assumptions) in the following way: for the first time point, a random block matrix $B^1$ of size $20 \times 20$ with 3 blocks is sampled (i.e. we generate 3 clusters at time point 1). A matrix $\Sigma_\beta^1$ is sampled from a Wishart distribution, and $\Sigma_B^1$ is filled with the corresponding values of $\Sigma_\beta^1$, which leads to the covariance matrix $\Sigma^1 = \alpha I_{20} + \Sigma_B^1$. Next, 100 samples from $\mathcal{N}(0, \Sigma^1)$ are drawn and collected in the data matrix $X^1$. By choosing a between-class variance that is large compared to the within-class variance $\alpha$, we create well separated clusters. The similarity matrix $S^1$ of pairwise inner products is computed, and the squared distances are stored in the matrix $D^1$. For the following time points $t = 2, \dots, 5$, the partition $B^t$ of size $20 \times 20$ is drawn from a Dirichlet-Multinomial distribution, conditioned on the partition at time point $t-1$. A new matrix $\Sigma_\beta^t$ is sampled from a Wishart distribution centered at $\Sigma_\beta^{t-1}$. If the numbers of blocks at time points $t-1$ and $t$ differ, we sample $\Sigma_\beta^t$ according to Eq. (14). 100 samples from $\mathcal{N}(0, \Sigma^t)$ are drawn with $\Sigma^t = \alpha I_{20} + \Sigma_B^t$, and the pairwise distances are stored in the matrix $D^t$. A PCA projection of this data is shown in Fig. 5 for illustration.

Figure 5: PCA projections of five time points with three well separated clusters per time point. Numbers and colors correspond to true labels.
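A condensed sketch of this generation scheme for a single time point is shown below; the within-class variance, the Wishart scale and the block sizes are placeholder values, and SciPy's wishart stands in for the Wishart draws described above.

```python
import numpy as np
from scipy.stats import wishart

def simulate_time_point(labels, d=100, alpha=1.0, Sigma_beta=None, v=50, seed=0):
    """Generate one synthetic time point as described above: expand the k x k
    between-class covariance (drawn from a Wishart if not supplied) to Sigma_B,
    draw d replications from N(0, alpha*I + Sigma_B), and return the matrix of
    squared Euclidean distances.  Parameter values are placeholders."""
    rng = np.random.default_rng(seed)
    n, k = len(labels), int(labels.max()) + 1
    if Sigma_beta is None:
        Sigma_beta = wishart.rvs(df=v, scale=10.0 * np.eye(k) / v)
    A = np.zeros((n, k))
    A[np.arange(n), labels] = 1.0
    Sigma = alpha * np.eye(n) + A @ Sigma_beta @ A.T
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=d)   # d x n
    S = X.T @ X
    D = np.diag(S)[:, None] + np.diag(S)[None, :] - 2.0 * S
    return D, Sigma_beta

labels_t1 = np.repeat([0, 1, 2], [7, 7, 6])      # 20 objects in 3 blocks at t = 1
D1, Sigma_beta1 = simulate_time_point(labels_t1)
```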
Experiments.

We perform four illustrative experiments for well-separated data:

  1. 500 Gibbs sweeps are computed for the Te-TiWD cluster process (after a burn-in phase of 250 sweeps). We check convergence of the algorithm by analyzing the trace of the number of blocks during sampling. On this trace plot we observe after how many sweeps the sampler stabilizes (the number of sweeps depends on the size of the data set). We observe a remarkable stability of the sampler (compared to the usual situation in traditional mixture models), which follows from the fact that no label switching can occur. Finally, we perform an annealing procedure to freeze a certain partition; here, the degrees of freedom $d$ are used as an annealing-type parameter for freezing a representative partition in the limit $d \to \infty$. On a standard computer, this experiment took roughly 4 minutes, and the sampler stabilizes after roughly 50 sweeps. As the ground truth is known, we can compute the adjusted Rand index as an indicator of the accuracy of the Te-TiWD model. We repeat the clustering process 50 times. The result is shown in the form of a box plot (Te-TiWD) in Fig. 6.

  2. In order to compare the performance of the time-evolving model (Te-TiWD) to baseline models, we also run the static probabilistic clustering process as well as hierarchical clustering models (Ward, complete linkage and single linkage) on every time point separately and compute the averaged accuracy over all time points. For the comparison to the static probabilistic method [32], we use the same set-up as for Te-TiWD: we run 500 Gibbs sweeps with a burn-in phase of 250 sweeps and repeat this 50 times. For the hierarchical methods, the resulting trees are cut at the number of clusters found by the nonparametric probabilistic model. Accuracy is computed for every time point separately and then averaged over all time points. In this scenario, the static clustering models perform almost as well as the time-evolving clustering (see Fig. 6), as expected in a setting where all groups are well separated at every single time point.

  3. As a further comparison to a baseline dynamic clustering model, we embed the distances into a Euclidean vector space and run a Gaussian dynamic clustering model (Te-Gauss) on the embedded vectorial data. As the clusters are well separated, embedding the data and clustering on vectors works well, as shown in box plot “Te-Gauss” in Fig. 6.

  4. As a last comparison we evaluate a pooled clustering over all time points. For this experiment, we not only need the pairwise distances at every single time point, but also the pairwise distances of objects across all time points. The number of sweeps and repetitions remains the same as in the experiments above. We conduct one clustering over all objects of all time points; after clustering, we extract the objects belonging to the same time point and compute the Rand index for every time point separately. This experiment shows worse results (see box plot "pooled" in Fig. 6), which can be explained as follows: by combining all time points into one data matrix, new clusters spanning all time points are found, i.e. clusters are shifted and objects from different time points are grouped together, introducing new clusters by reforming the boundaries of old clusters. These new clusters prevent objects from grouping together as they would at single time points, destroying the underlying "true" cluster structure.

Figure 6: We compare our new dynamic model (Te-TiWD) with baseline methods: static clustering as in [32], combined clustering over all time points (pooled), a Gaussian time-evolving clustering model (Te-Gauss) as well as to Ward, complete linkage and single linkage. In this experiment with three well separated clusters per time point, all methods perform very well, except for pooling the data.

4.1.2 Highly overlapping clusters.

For a second experiment, we generate data in a similar way as above, but this time we create 5 highly overlapping clusters, with 200 data points per time point in 40 dimensions. A PCA projection of this data is shown in Fig. 8. On a standard computer, this experiment took roughly 3 hours, and the sampler stabilizes after roughly 500 sweeps. Again, we compare the performance of the translation-invariant time-evolving clustering model with static state-of-the-art probabilistic and hierarchical clustering models, which cluster every time point separately, and with a time-varying Gaussian clustering model on embedded data (Te-Gauss). For highly overlapping clusters, the new dynamic clustering model outperforms the static probabilistic clustering model [32], and the hierarchical models (Ward, complete linkage, single linkage) fail completely. Further, our new model Te-TiWD outperforms the dynamic vectorial clustering model (Te-Gauss), demonstrating that embedding the data into a Euclidean vector space yields worse results than working on the distances directly. We tested statistical significance with the Kruskal-Wallis rank-sum test and the Dunn post-hoc test with Bonferroni correction for the pairwise analysis. These tests show that Te-TiWD performs significantly better than all clustering models we compared to. Results are shown in Fig. 7.

Figure 7: We compare our new model (Te-TiWD) with baseline methods on synthetic data for five highly overlapping clusters. Our model significantly outperforms all baseline methods.

Figure 8: PCA projections of five time points of simulated data with five highly overlapping clusters. Numbers and colors correspond to true labels.
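The evaluation pipeline used in these comparisons can be sketched as follows: the adjusted Rand index per run and time point, followed by a Kruskal-Wallis test across methods and Dunn's post-hoc test with Bonferroni correction. The score arrays below are random placeholders, and the Dunn test relies on the third-party scikit-posthocs package (an assumption; any equivalent implementation works).

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score
from scipy.stats import kruskal
import scikit_posthocs as sp

# Accuracy of one clustering run at one time point (placeholder label arrays).
true_labels = np.array([0, 0, 1, 1, 2, 2])
inferred_labels = np.array([1, 1, 0, 0, 2, 2])
ari = adjusted_rand_score(true_labels, inferred_labels)

# Significance over repeated runs: one array of ARI scores per method (placeholders).
scores = {
    "Te-TiWD":  np.random.default_rng(0).uniform(0.7, 0.9, 50),
    "static":   np.random.default_rng(1).uniform(0.4, 0.6, 50),
    "Te-Gauss": np.random.default_rng(2).uniform(0.5, 0.7, 50),
}
H, p = kruskal(*scores.values())                        # Kruskal-Wallis rank-sum test
dunn = sp.posthoc_dunn(list(scores.values()), p_adjust="bonferroni")  # pairwise Dunn test
```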

4.1.3 Data generation independent of model assumptions.

Figure 9: We compare our new model (Te-TiWD) with baseline methods on synthetic data which is generated independently of the model assumptions, for five highly overlapping clusters. We observe that our method significantly outperforms all baseline methods.

We also generate data in a second way, independently of the model assumptions, to demonstrate that the performance of Te-TiWD does not depend on the way the data was generated. To this end, we repeat the case of highly overlapping clusters and generate data in the following way: dynamic Gaussian clusters are generated over a period of 5 time points. At each time point, five clusters are generated. 200 data points are available at every time point and are randomly split into 5 parts, every part representing the number of data points per cluster. For consecutive time points, the number of data points per cluster is sampled from a Dirichlet-Multinomial distribution. Every cluster is sampled from a Gaussian distribution with a large variance, resulting in highly overlapping clusters. Between time steps, the cluster centers move randomly, with relocations sampled from the same distribution. Finally, at every time point, the pairwise distance matrix is computed, resulting in a series of evolving distance matrices. On this second synthetic data set, Te-TiWD also performs significantly better than all baseline methods, as shown in Fig. 9. Note that for the comparison with the Gaussian dynamic clustering model (Te-Gauss) we first embed the distances into vectorial data and do not work on the simulated vectorial data directly, in order to obtain a fair comparison.

4.2 Analysis of Brain Cancer Patients Based on Electronic Health Records (EHR)

We apply our proposed model to a dataset of clinical notes from brain cancer patients at Memorial Sloan Kettering Cancer Center (MSKCC). Brain cancer patients make up 1.4% of all cancer patients annually. Survival is highly variable, depending on age, gender, cancer subtype, and progression when diagnosed, but on average 33% of patients survive the first five years.
As a first step, we partition a total of 195,297 sentences from 3,403 electronic health records (EHR) of 704 MSKCC brain cancer patients into groups of similar vocabulary. This is done by treating sentences as binary vectors with non-zero entries corresponding to the vocabulary terms they contain, and obtaining a similarity measure using ranked neighborhood comparisons [31]. Sentences are clustered using this similarity measure with the Louvain method [8]. Using these sentence clusters as features, we obtain patient similarities with the same ranked neighborhood comparison method. We partition the patients' documents into windows of one year each and obtain three time points where enough documents are available to compute similarities between patients. At each year, we represent a patient by a binary vector whose length is the number of sentence clusters; a non-zero entry corresponds to an occurrence of that sentence cluster in the patient's corpus during the specified time period.
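A structural sketch of this preprocessing is given below. It is not the paper's pipeline: the ranked neighborhood comparison similarity of [31] is replaced by a plain Jaccard similarity as a stand-in, the toy binary sentence vectors are random placeholders, and the Louvain step uses the implementation shipped with networkx (assumed to be available in version 2.8 or later).

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def jaccard_similarity(V):
    """V: binary matrix (sentences x vocabulary).  Stand-in similarity;
    the paper uses ranked neighborhood comparisons [31] instead."""
    inter = V @ V.T
    sizes = V.sum(axis=1)
    union = sizes[:, None] + sizes[None, :] - inter
    return np.divide(inter, union, out=np.zeros_like(inter), where=union > 0)

# Toy binary sentence vectors (placeholders for the EHR vocabulary vectors).
rng = np.random.default_rng(0)
V = (rng.uniform(size=(30, 100)) < 0.1).astype(float)

S = jaccard_similarity(V)
G = nx.Graph()
G.add_nodes_from(range(len(S)))
for i in range(len(S)):
    for j in range(i + 1, len(S)):
        if S[i, j] > 0:
            G.add_edge(i, j, weight=float(S[i, j]))

# Louvain community detection on the similarity graph.
sentence_clusters = louvain_communities(G, weight="weight", seed=0)
```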

Figure 10: Clusters over all 3 years of brain cancer patients. We find ten different cluster chains, of which 2 remain over all three years, 3 vanish after the second year, and one new cluster appears in year 2 and remains in the third year. The size of a token denotes the cluster size, i.e. the number of patients per cluster. Note that patients can change clusters, so a cluster decreasing in size or disappearing does not necessarily mean those patients die or leave the hospital.

In the first year, we have 704 patients, in the second year 170, and in the third year 123 patients. This data set has specific features which make our model particularly suitable for it. First, the number of patients differs in every year. Second, patients disappear over the time course, either due to death or due to leaving the hospital. Third, patients do not necessarily have a document every year, so a patient can be absent in year 2 and reappear in year 3; such a gap occurred a total of 31 times in our data set. Our flexible model is well suited for this problem, as it can deal with a changing number of objects and a changing number of clusters in every year, and clusters as well as patients can disappear or reappear. The result of our clustering model is shown in Fig. 10. On a standard computer, this experiment took roughly 6 hours, and the sampler stabilizes after roughly 500 sweeps.
We observe ten different cluster chains over the time series. Note that patients can switch cluster chains over the years: as the tumor progresses, the status of a patient may change, resulting in more similarity to a different cluster chain than in the year before. To analyze the results of the method, we discuss the most and least deadly cluster chains in more detail, as analyzing all subtleties between clusters would be beyond the scope of this paper.
Cluster chain 1 is the most deadly cluster, with a death rate of 80%. Additionally, it only appears in the first year. Word clouds representing the sentence clusters of this patient group are shown in Fig. 11. We can see that these patients experience seizures, which indicates that the brain cancer is especially aggressive. They also show sentence clusters about two types of blood cancer, B-cell and mantle cell lymphoma, and the prescription of cytarabine, which treats these cancers. This combination of blood and brain cancers could explain why this cluster chain is so deadly.

Figure 11: Word clouds representing five sentence clusters that are observed in patients from cluster chain 1, the most deadly cluster. They describe patients that have blood cancers (lymphomas) in addition to brain cancer.

Cluster chain 5 is the least deadly chain, with a death rate of 42%. Word clouds representing sentence clusters for this patient group are shown in Fig. 12. These clusters consist mainly of "follow-up" language, such as checking the patients' gait, speech, reflexes and vision. The sentence clusters appear to indicate positive results; e.g., "Normal visual fields are intact" and "Patient denies difficulty with speech, language, balance or gait" are two prototype sentences representing two sentence clusters that appear in this chain. Furthermore, there is a sentence cluster with the prototype sentence "no evidence for progression," indicating that these patients' cancers are in a manageable state.

Figure 12: Word clouds representing four sentence clusters that are observed in patients from cluster chain 5, the most positive cluster. These sentence clusters are ”follow-up” language, such as checking reflexes or the ability to walk and see well. This indicates that the patients are in a relatively stable state under regular observation.

Modeling patients over time provides important insights for automated analyses and for medical doctors, as it is possible to check, for every patient, how the state of the patient, as represented by the cluster membership, changes over time. Also, if a new patient enters the study, one can infer, based on similarity to other patients, how to classify and possibly best treat this patient, or which clinical trials to suggest. Such clustering methods therefore make an important step towards solving the technical challenges of personalized cancer treatment.

5 Conclusion

In this work, we propose a novel dynamic Bayesian clustering model to cluster time-evolving distance data. A probabilistic model that is able to handle non-vectorial data in the form of pairwise distances has the advantage that there is no need to embed the data into a vector space. To summarize, our contributions in this work are five-fold: i) We develop a dynamic probabilistic clustering approach that circumvents the potentially problematic data embedding step by directly operating on pairwise time-evolving distance data. ii) Our model makes it possible to track clusters over time, giving information about clusters that die out or emerge over time. iii) By using a Dirichlet process prior, there is no need to fix the number of clusters in advance. iv) We test and validate our model on simulated data, comparing the performance of our new method with baseline probabilistic and hierarchical clustering methods. v) We use our model to cluster brain cancer patients into similar subgroups over a time course of three years. Dynamic partitioning of patients can play an important role in cancer treatment, as it enables inference from groups of similar patients to an individual. Such inference can help medical doctors to adapt or optimize existing treatments, assign billing codes, or predict survival times for a patient based on similar patients in the same group.

Acknowledgments

We thank Natalie Davidson, Theofanis Karaletsos and David Kuo for helpful discussions and suggestions. JV and MK were partly funded through postdoctoral fellowships awarded by the Swiss National Science Foundation (SNSF; under PBBSP2_146758) and by the German Research Foundation (DFG; under Kl 2698/1-1 and VO 2003/1-1), respectively. We gratefully acknowledge funding from Memorial Sloan Kettering Cancer Center and the National Cancer Institute (grant 1R01CA176785-01A1). Access to patient data is covered under IRB Waiver #WA0426-13.

References

  • [1] David Adametz and Volker Roth. Bayesian partitioning of large-scale distance data. In NIPS, pages 1368–1376, 2011.
  • [2] Amr Ahmed and Eric Xing. Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. Proceedings of The Eighth SIAM International Conference on Data Mining (SDM), 2008.
  • [3] T.W. Anderson. The non-central Wishart distribution and certain problems of multivariate statistics. Ann. Math. Statist., 17(4):409–431, 1946.
  • [4] Seema Bandyopadhyay and Edward J. Coyle. An energy efficient hierarchical clustering algorithm for wireless sensor networks. In Proceedings of INFOCOM 2003, Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies, volume 3, pages 1713–1723. IEEE, 2003.
  • [5] M. Bilodeau and D. Brenner. Theory of Multivariate Statistics. Springer, 1999.
  • [6] D. Blei and M. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1:121–144, 2006.
  • [7] David M. Blei and Peter Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011.
  • [8] Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008.
  • [9] M. Cuturi and J.-P. Vert. A mutual information kernel for strings. In Proceedings of the International Joint Conference on Neural Networks, 2004.
  • [10] Michael B Eisen, Paul T Spellman, Patrick O Brown, and David Botstein. Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences, 95(25):14863–14868, 1998.
  • [11] W.J. Ewens. The sampling theory of selectively neutral alleles. Theoretical Population Biology, 3:87–112, 1972.
  • [12] T.S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209–230, 1973.
  • [13] Anil K. Jain. Data clustering: 50 years beyond k-means, 2008.

  • [14] Anil K Jain and Richard C Dubes. Algorithms for clustering data. Prentice-Hall, Inc., 1988.
  • [15] Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
  • [16] C. Leslie, E. Eskin, A. Cohen, J. Weston, and W.S. Noble. Mismatch string kernels for discriminative protein classification. Bioinformatics, 1(1):1–10, 2003.
  • [17] S.N. MacEachern. Estimating normal means with a conjugate-style Dirichlet process prior. Communications in Statistics: Simulation and Computation, 23:727–741, 1994.
  • [18] P. McCullagh. Marginal likelihood for distance matrices. Statistica Sinica, 19:631–649, 2009.
  • [19] P. McCullagh and J. Yang. How many clusters? Bayesian Analysis, 3:101–120, 2008.
  • [20] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
  • [21] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
  • [22] J. Pitman. Combinatorial stochastic processes. In J. Picard, editor, Ecole d’Ete de Probabilites de Saint-Flour XXXII-2002. Springer, 2006.
  • [23] Gunnar Rätsch and Sören Sonnenburg. Accurate splice site prediction for Caenorhabditis elegans. In Kernel Methods in Computational Biology, MIT Press series on Computational Molecular Biology, pages 277–298. MIT Press, 2004.
  • [24] C.P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2005.
  • [25] H Saigo, J-P Vert, N Ueda, and T Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20(11):1682–1689, 2004.
  • [26] Sören Sonnenburg, Gunnar Rätsch, and Konrad Rieck. Large scale learning with string kernels. In Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors, Large Scale Kernel Machines, pages 73–103. MIT Press, Cambridge, MA., 2007.
  • [27] M.S. Srivastava. Singular Wishart and multivariate beta distributions. Annals of Statistics, 31(2):1537–1560, 2003.
  • [28] Michael Steinbach, George Karypis, Vipin Kumar, et al. A comparison of document clustering techniques. In KDD workshop on text mining, volume 400, pages 525–526. Boston, 2000.
  • [29] Y. W. Teh, C. Blundell, and L. T. Elliott. Modelling genetic variations with fragmentation-coagulation processes. In Advances In Neural Information Processing Systems, 2011.
  • [30] S. V. N. Vishwanathan, Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. Graph kernels. J. Mach. Learn. Res., 11:1201–1242, August 2010.
  • [31] Julia E. Vogt. Unsupervised structure detection in biomedical data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2015.
  • [32] Julia E. Vogt, Sandhya Prabhakaran, Thomas J. Fuchs, and Volker Roth. The translation-invariant Wishart-Dirichlet process for clustering distance data. In ICML, pages 1111–1118, 2010.
  • [33] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Time-sensitive Dirichlet process mixture models. Technical report, Carnegie Mellon University, 2005.