A flexible EM-like clustering algorithm for noisy data

07/02/2019
by   Matthieu Jonckheere, et al.

We design a new robust clustering algorithm that can deal efficiently with noise and outliers in diverse datasets. As an EM-like algorithm, it is based on estimates of the cluster centers and covariances, but also on a scale parameter per data point. This allows the algorithm to accommodate heavier- or lighter-tailed distributions (in comparison to classical Gaussian distributions) and outliers without significantly losing efficiency in classical scenarios. The convergence and accuracy of the algorithm are first analyzed on synthetic data. We then show that the proposed algorithm outperforms classical unsupervised methods from the literature, such as k-means, the EM algorithm and HDBSCAN, when applied to real datasets such as MNIST, NORB and 20newsgroups.



1 Introduction

Clustering tasks consist in arranging a set of elements into groups with homogeneous properties/features that capture some important structure of the whole set. Being the main paradigm of unsupervised learning, clustering has attracted great interest due to the considerable increase in the amount of unlabeled data in recent years. As the characteristics of real-life data, in geometrical and statistical terms, are very diverse, an intensive research effort has been dedicated to defining various clustering algorithms that adapt to particular features and structural properties; see [henning, sklearn] for discussions on the different methods and how to choose among them. In this work, we assume that each cluster can be well represented as a set of points dispersed around a prototype. Each prototype is a summary of the location of its group and could be one of its data points. With this representation assumption, we leave out of our scope toy examples like concentric circles, where other families of algorithms perform well (see for instance [spectral]).

Among the different types of clustering algorithms, we focus here on the so-called model-based family. A well-known multivariate probabilistic model is the Gaussian Mixture Model (GMM), which represents the distribution of the data as a mixture of Gaussian distributions. The corresponding clustering criterion is simple: all points drawn from one of these normal distributions are considered to belong to the same cluster. The parameters of this model can be estimated using the two-step iterative Expectation-Maximization (EM) algorithm [EMalgo], based on the maximization of the likelihood. In particular, for the GMM case, closed-form expressions exist for the parameter estimates at the maximization step.
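For reference, the following minimal sketch fits such a baseline GMM by EM with scikit-learn; the synthetic two-cluster data and all parameter values are illustrative, not taken from the paper.

```python
# Baseline: fit a Gaussian Mixture Model by EM and read off cluster labels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=-2.0, scale=1.0, size=(200, 2)),   # cluster 1
    rng.normal(loc=+2.0, scale=1.0, size=(200, 2)),   # cluster 2
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)   # the E and M iterations run inside fit()
print(gmm.means_)             # estimated cluster centers
```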

It is known that the EM algorithm can perform very poorly in the presence of noise [badperf]. This phenomenon is simply explained by the non-robustness of the estimators computed in the algorithm: sample means and sample covariance matrices [maronna]. In order to improve the performance, two main strategies have been developed. One consists in modifying the model to take the noise into account; the other keeps the original model and replaces the estimators by others that are able to deal with outliers.

Several variations of the Gaussian mixture model have been developed. We are mainly interested in those variations that target the problem of mixtures of more general distributions, which allow modeling a wider range of data and accommodate the presence of noise and outliers. Regarding the use of non-Gaussian distributions, an important model defined as a mixture of Student's $t$-distributions was proposed in [c2]. In that paper, the authors developed an algorithm to estimate the parameters of the mixture, with known and unknown degrees of freedom, by maximizing the likelihood. Furthermore, in [c5] the authors addressed the task of clustering with mixture models based on this particular distribution. More recently, hyperbolic and skew $t$-distributions were considered in [hyper].

In the present paper, we focus on addressing the robustness issue of GMM by proposing a model-based algorithm that provides good clustering performance for more general distribution mixtures. Our method is inspired by robust applications of Elliptically Symmetric (ES) distributions [boente, esa]. Elliptical distributions have been widely used in applications where Gaussian distributions are not good-enough approximations because of the presence of heavy tails or outliers [CFAR, radar]. This family includes, among others, the class of compound-Gaussian distributions [CG, CG2, c1], which contains the Gaussian, Student's $t$ and $K$-distributions. The method we develop indeed mimics a mixture of distributions within this very general class. Thus, our model is intended to be, by construction, more general than the classic GMM and the model proposed in [c2].

Other robust clustering approaches worth mentioning are models that add an extra term to the usual Gaussian likelihood, and algorithms with modifications inspired by usual robust techniques such as robust point estimators, robust scales, weights for observations and trimming. For instance, [unif] considered the presence of a uniform background noise, while [RIMLE] defined a pseudo-likelihood that filters low-density areas. In [Spatial-EM], the authors replaced the usual mean and sample covariance by the spatial median and the rank covariance matrix (RCM). In [ktau], a robust scale is used to define a k-means-like algorithm that can deal with outliers. Moreover, in [weights1, weights2, weighted] different weights for the observations were proposed; small weights correspond, as usual in the robust literature, to observations that are far from the cluster centers. Finally, trimming algorithms leave a proportion of the data points that are far from all the means out of the parameter estimation [Tclust].

In this paper we present an algorithm with the following features:

  • it follows the two steps, expectation and maximization, of EM algorithms,

  • it derives estimates of cluster centers and covariances which turn out to be robust,

  • it also estimates one scale parameter per data point, increasing the overall clustering flexibility.

Though the number of parameters is of the same order as the amount of data, the algorithm does not lose much efficiency, for instance when the data are close to Gaussian. On the contrary, it can actually be more efficient for noisy data, gaining the flexibility to accommodate heavier tails and outliers.

The induced clustering performance is improved compared to k-means, the EM algorithm and HDBSCAN [HDBSCAN1, HDBSCAN2] when applied to real datasets such as MNIST [MNIST], NORB [NORB] and 20newsgroups [20newsgroup]. Furthermore, our algorithm is able to provide accurate estimates of the location and dispersion parameters even in the presence of heavy tails or noise, as demonstrated in our simulations.

The rest of the paper is organized as follows. In Section 2, we present our clustering algorithm and discuss some of its important aspects in detail. Section 3 is devoted to the experimental results, which show the improved performance of our method on different real datasets in comparison with other commonly used methods such as k-means. Finally, our conclusions are stated in Section 4.

Notation: Vectors (resp. matrices) are denoted by boldfaced lowercase (resp. uppercase) letters. $\mathbf{A}^{T}$ represents the transpose of $\mathbf{A}$, $|\mathbf{A}|$ the determinant of $\mathbf{A}$ and $\operatorname{tr}(\mathbf{A})$ the trace of $\mathbf{A}$. "i.i.d." stands for "independent and identically distributed", "w.r.t." stands for "with respect to" and $\sim$ means "is distributed as".

2 The Model

In this section, we present a detailed description of the proposed robust clustering algorithm. Given a set of $n$ data points $\{\mathbf{x}_i\}_{i=1}^{n}$ in $\mathbb{R}^{m}$, we start by considering the points as samples from a mixture of $K$ distributions with the following probability density function (pdf):

$$f(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, f_k(\mathbf{x}; \boldsymbol{\theta}_k), \qquad (1)$$

where $\pi_k$ represents the proportion of the $k$-th distribution in the mixture and $f_k$ its pdf, depending on the parameters grouped in $\boldsymbol{\theta}_k$.

Some distributions of interest belong to the family of Elliptically Symmetric distributions. An $m$-dimensional random vector $\mathbf{x}$ is ES-distributed if its pdf can be written as

$$f(\mathbf{x}) = c_{m}\, |\boldsymbol{\Sigma}|^{-1/2}\, g\!\left((\mathbf{x}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right), \qquad (2)$$

where $c_m$ is a normalizing constant, $g$ is any function (called the density generator) such that (2) defines a pdf, $\boldsymbol{\mu}$ is the location parameter (mean) and $\boldsymbol{\Sigma}$ is the scatter matrix. The matrix $\boldsymbol{\Sigma}$ reflects the structure of the covariance matrix of $\mathbf{x}$ (the covariance matrix is equal to $\boldsymbol{\Sigma}$ up to a scale factor if the distribution has a finite second-order moment, see [ollila2012complex] for details). This is denoted $\mathbf{x} \sim \mathrm{ES}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, g)$.

Interestingly, such a model admits a Stochastic Representation Theorem: $\mathbf{x} \sim \mathrm{ES}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, g)$ if and only if it admits the following stochastic representation [Yao73]

$$\mathbf{x} \overset{d}{=} \boldsymbol{\mu} + \sqrt{\mathcal{Q}}\, \mathbf{A}\mathbf{u}, \qquad (3)$$

where the non-negative real random variable $\mathcal{Q}$, called the modular variate, is independent of the random vector $\mathbf{u}$, which is uniformly distributed on the unit $(m-1)$-sphere, and $\mathbf{A}$ is a factorization of $\boldsymbol{\Sigma}$, i.e. $\mathbf{A}\mathbf{A}^{T} = \boldsymbol{\Sigma}$.

ES distributions include the class of compound-Gaussian distributions. This particular family generalizes, in a symmetric way, the Student's $t$-distributions by multiplying a Gaussian vector by a random variable representing a radial univariate variance. A random vector $\mathbf{x}$ following a compound-Gaussian distribution can be written as

$$\mathbf{x} = \boldsymbol{\mu} + \sqrt{\tau}\, \mathbf{g}, \qquad (4)$$

where $\boldsymbol{\mu}$ corresponds to the mean, $\tau$ is a positive random variable independent of $\mathbf{g}$, and $\mathbf{g} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$. Generally, a one-dimensional constraint is assumed for identifiability between $\tau$ and $\boldsymbol{\Sigma}$. In this work, we assume $\operatorname{tr}(\boldsymbol{\Sigma}) = m$.

This class of distributions contains in particular the Gaussian and the heavy-tailed Student's $t$ cases. If the scale variable $\tau$ is constantly equal to 1, we recover the classical Gaussian distribution; in terms of pdf, the Normal (Gaussian) distribution is the particular case of ES distributions in which $g(t) = e^{-t/2}$ and $c_m = (2\pi)^{-m/2}$. Furthermore, when $\tau$ follows an inverse-gamma distribution, $\mathbf{x}$ follows a $t$-distribution with the corresponding degrees of freedom. The class includes other possibilities, such as the $K$-distribution, for which $\tau$ is Gamma-distributed.
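For illustration, the following sketch samples from a compound-Gaussian model of the form (4); choosing $\tau$ inverse-gamma distributed yields a heavy-tailed, multivariate Student-like sample. The parameter values below are ours, not from the paper.

```python
# Sampling from a compound-Gaussian model: x = mu + sqrt(tau) * g, g ~ N(0, Sigma).
import numpy as np

rng = np.random.default_rng(0)
m, n, nu = 2, 1000, 3.0                      # dimension, sample size, degrees of freedom
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.linalg.cholesky(Sigma)                # Sigma = A A^T

g = rng.standard_normal((n, m)) @ A.T        # g ~ N(0, Sigma)
tau = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)   # inverse-gamma scales
x = mu + np.sqrt(tau)[:, None] * g           # heavy-tailed compound-Gaussian samples
```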

To connect the most general model of ES distributions with the sub-class of compound-Gaussian distributions, a Gaussian-core representation of CES distributions has been introduced in [draskovic2018new]. The stochastic representation (3) can be rewritten using the fact that $\mathbf{u} \overset{d}{=} \mathbf{n}/\|\mathbf{n}\|$, where $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_m)$. Hence, a random vector $\mathbf{x} \sim \mathrm{ES}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, g)$ can be represented as

$$\mathbf{x} \overset{d}{=} \boldsymbol{\mu} + \frac{\sqrt{\mathcal{Q}}}{\|\mathbf{n}\|}\, \mathbf{A}\mathbf{n}, \qquad (5)$$

with $\mathcal{Q}$ and $\mathbf{A}$ defined as in Eq. (3). If the scalar factor $\sqrt{\mathcal{Q}}/\|\mathbf{n}\|$ is independent of $\mathbf{n}$, the vector $\mathbf{x}$ follows a compound-Gaussian distribution.

Because we want to keep the flexibility of this model, we do not set a particular distribution for $\tau$ in Eq. (4) to estimate the cluster distribution in a parametric way. Instead, we take inspiration from the compound-Gaussian family to model each cluster in an approximate way. We assume that each data point $\mathbf{x}_i$ from the $k$-th cluster of the mixture is the result of multiplying a sample from a multivariate zero-mean Gaussian by a deterministic constant, followed by a translation, i.e. it can be written as

$$\mathbf{x}_i = \boldsymbol{\mu}_k + \sqrt{\tau_{ik}}\, \mathbf{g}_i, \qquad \mathbf{g}_i \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_k), \qquad (6)$$

with the mean $\boldsymbol{\mu}_k$ a real $m$-dimensional vector, $\tau_{ik}$ a deterministic constant (or parameter), and $\operatorname{tr}(\boldsymbol{\Sigma}_k) = m$. As shown in Figure 1, elliptical confidence regions are set by the different $\tau_{ik}$ values, which make the cluster membership more flexible. This flexibility is achieved without affecting the estimation of the important parameters. Notice that treating all the $\tau_{ik}$ in the $k$-th cluster as unknown deterministic parameters amounts to assuming the following pdf for the cluster:

$$f_k(\mathbf{x}_1, \dots, \mathbf{x}_{n_k}) = \prod_{i=1}^{n_k} \frac{1}{(2\pi\tau_{ik})^{m/2}\,|\boldsymbol{\Sigma}_k|^{1/2}} \exp\!\left(-\frac{(\mathbf{x}_i-\boldsymbol{\mu}_k)^{T}\boldsymbol{\Sigma}_k^{-1}(\mathbf{x}_i-\boldsymbol{\mu}_k)}{2\,\tau_{ik}}\right), \qquad (7)$$

where $n_k$ is the number of elements in cluster $k$.

Figure 1: Confidence regions for different $\tau$ values.
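As a small illustration of model (6), the sketch below generates one cluster with a deterministic scale per point and the scatter matrix normalized so that $\operatorname{tr}(\boldsymbol{\Sigma}) = m$; all values are illustrative.

```python
# Generate one cluster according to model (6): x_i = mu + sqrt(tau_i) * g_i.
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 2
mu = np.zeros(m)
Sigma = np.array([[1.5, 0.3],
                  [0.3, 0.5]])
Sigma *= m / np.trace(Sigma)                 # identifiability constraint tr(Sigma) = m
tau = np.linspace(0.2, 5.0, n)               # one deterministic scale per data point
g = rng.multivariate_normal(np.zeros(m), Sigma, size=n)
X = mu + np.sqrt(tau)[:, None] * g           # points with per-point elliptical spread
```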

2.1 Parameter estimation for one distribution

To have the flavour of the general method, let us first assume that all data points come from only one compound-Gaussian model. In order to fully characterize the distribution, one would have to fix the parameters and and determine , the pdf of . Our approach of considering the

’s as unknown parameters (and hence not dealing with a non-parametric model) comes from the fact that it is actually difficult (and possibly not necessary) to estimate the distribution of

if we assume noise and heavy tails. In the sequel, we hence consider one deterministic for each data point as for instance in [Fred]. In consequence, we need to estimate , and for all with all the samples of this distribution.

If we consider this approximated distribution then with . It is well-known (see e.g. [Anderson84]) that the Maximum-Likelihood estimators for once is fixed, the sample covariance estimator is the maximum likelihood estimator for and are given by

Notice that in such a case, i.e., omitting the particular structure of w.r.t. the ’s, does not depend on index (summation is done over all ’s).

However, as the parameters of interest are $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ rather than the $\tau_i$'s, the authors of [Fred] adopt a two-step strategy to alternately maximize the likelihood function w.r.t. each parameter. In a first step, one estimates $\boldsymbol{\mu}$, then each $\tau_i$ given $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$, and then $\boldsymbol{\Sigma}$ given $\boldsymbol{\mu}$ and the $\tau_i$'s. The second step consists in plugging in the resulting estimates. The solutions of the first step are

$$\hat{\boldsymbol{\mu}} = \frac{\sum_{i=1}^{n} \mathbf{x}_i/\tau_i}{\sum_{i=1}^{n} 1/\tau_i}, \qquad (8)$$

$$\hat{\tau}_i = \frac{(\mathbf{x}_i-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}_i-\boldsymbol{\mu})}{m}, \qquad (9)$$

$$\hat{\boldsymbol{\Sigma}} = \frac{1}{n}\sum_{i=1}^{n} \frac{(\mathbf{x}_i-\boldsymbol{\mu})(\mathbf{x}_i-\boldsymbol{\mu})^{T}}{\tau_i}. \qquad (10)$$

Finally, plugging the estimated scales into (8) and (10), the estimators are given as the solutions of the following equations:

$$\hat{\boldsymbol{\mu}} = \frac{\sum_{i=1}^{n} \mathbf{x}_i \big/ \big((\mathbf{x}_i-\hat{\boldsymbol{\mu}})^{T}\hat{\boldsymbol{\Sigma}}^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}})\big)}{\sum_{i=1}^{n} 1 \big/ \big((\mathbf{x}_i-\hat{\boldsymbol{\mu}})^{T}\hat{\boldsymbol{\Sigma}}^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}})\big)}, \qquad (11)$$

$$\hat{\boldsymbol{\Sigma}} = \frac{m}{n}\sum_{i=1}^{n} \frac{(\mathbf{x}_i-\hat{\boldsymbol{\mu}})(\mathbf{x}_i-\hat{\boldsymbol{\mu}})^{T}}{(\mathbf{x}_i-\hat{\boldsymbol{\mu}})^{T}\hat{\boldsymbol{\Sigma}}^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}})}, \qquad (12)$$

$$\hat{\tau}_i = \frac{(\mathbf{x}_i-\hat{\boldsymbol{\mu}})^{T}\hat{\boldsymbol{\Sigma}}^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}})}{m}. \qquad (13)$$

This system of equations, and particularly the fixed-point equations for estimating $\hat{\boldsymbol{\mu}}$ and $\hat{\boldsymbol{\Sigma}}$, is solved by choosing an initial value and iterating until convergence, as shown and discussed in [Fred, frontera2014hyperspectral].
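A minimal sketch of this fixed-point procedure for a single cluster, based on the equations as reconstructed above; the function name and its defaults are ours, not the authors' reference implementation.

```python
import numpy as np

def fit_one_cluster(X, n_iter=50, eps=1e-8):
    """Estimate mu, Sigma and the per-point scales tau for one cluster."""
    n, m = X.shape
    mu = X.mean(axis=0)                    # initial location
    Sigma = np.eye(m)                      # initial scatter
    for _ in range(n_iter):
        diff = X - mu
        inv = np.linalg.inv(Sigma)
        # quadratic forms delta_i = (x_i - mu)^T Sigma^{-1} (x_i - mu)
        delta = np.maximum(np.einsum("ij,jk,ik->i", diff, inv, diff), eps)
        w = 1.0 / delta                    # robust weights: distant points count less
        mu = (w[:, None] * X).sum(axis=0) / w.sum()               # cf. (11)
        diff = X - mu
        Sigma = m * np.einsum("i,ij,ik->jk", w, diff, diff) / n   # cf. (12)
        Sigma *= m / np.trace(Sigma)       # keep tr(Sigma) = m
    diff = X - mu
    delta = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)
    tau = delta / m                        # cf. (13)
    return mu, Sigma, tau
```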

2.2 Parameter estimation for the mixture model

For the clustering task, we start with $K$ compound-Gaussian distributions but, importantly, the specific distribution of each cluster is not perfectly known, since the pdf of the $\tau_{ik}$'s is not specified. Similarly to the EM algorithm, we extend the model with discrete latent variables $Z_i \in \{1, \dots, K\}$, that are not observed, representing the cluster of each observation $\mathbf{x}_i$. In the E-step we compute, for each observation $i$ and cluster $k$, the membership probability $p_{ik}$, while in the M-step we estimate the parameters $\boldsymbol{\theta} = \{\pi_k, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k, \tau_{ik}\}$. Given the $p_{ik}$'s, the expectation of the extended log-likelihood of the model is derived from (1) as follows:

$$Q(\boldsymbol{\theta}) = \sum_{i=1}^{n}\sum_{k=1}^{K} p_{ik}\left[\log \pi_k + \log f_k(\mathbf{x}_i; \boldsymbol{\mu}_k, \tau_{ik}\boldsymbol{\Sigma}_k)\right], \qquad (2.2)$$

where $p_{ik} = \mathbb{P}(Z_i = k \mid \mathbf{x}_i) = \dfrac{\pi_k f_k(\mathbf{x}_i; \boldsymbol{\theta}_k)}{\sum_{j=1}^{K}\pi_j f_j(\mathbf{x}_i; \boldsymbol{\theta}_j)}$, with $f_k$ as in (7).

Proposition 1.

Given the data points and the extended likelihood of the model as in (2.2), the maximization w.r.t. each parameter of the model contained in $\boldsymbol{\theta}$, keeping the rest of the parameters fixed, leads to

$$\hat{\pi}_k = \frac{1}{n}\sum_{i=1}^{n} p_{ik} \qquad (14)$$

for the proportion of each distribution,

$$\hat{\boldsymbol{\mu}}_k = \frac{\sum_{i=1}^{n} p_{ik}\,\mathbf{x}_i/\tau_{ik}}{\sum_{i=1}^{n} p_{ik}/\tau_{ik}} \qquad (15)$$

for the mean of each distribution,

$$\hat{\boldsymbol{\Sigma}}_k = \frac{\sum_{i=1}^{n} p_{ik}\,(\mathbf{x}_i-\boldsymbol{\mu}_k)(\mathbf{x}_i-\boldsymbol{\mu}_k)^{T}/\tau_{ik}}{\sum_{i=1}^{n} p_{ik}} \qquad (16)$$

for the covariance matrices, and

$$\hat{\tau}_{ik} = \frac{(\mathbf{x}_i-\boldsymbol{\mu}_k)^{T}\boldsymbol{\Sigma}_k^{-1}(\mathbf{x}_i-\boldsymbol{\mu}_k)}{m} \qquad (17)$$

for all scale parameters.

Proof.

We proceed by maximizing the likelihood expectation with respect to the different parameters. Note that there is a constraint on the proportions, $\sum_{k=1}^{K}\pi_k = 1$, which requires a Lagrange multiplier. Solving the system of equations composed of

$$\frac{\partial}{\partial \pi_k}\left[Q(\boldsymbol{\theta}) + \lambda\Big(\sum_{j=1}^{K}\pi_j - 1\Big)\right] = 0, \quad k = 1, \dots, K, \qquad (18)$$

together with the conditions $\sum_{k}\pi_k = 1$ and $\pi_k \geq 0$, we get the expression given in (14).

Taking the derivative with respect to $\boldsymbol{\mu}_k$ we find

$$\frac{\partial Q}{\partial \boldsymbol{\mu}_k} = \sum_{i=1}^{n} p_{ik}\, \frac{\boldsymbol{\Sigma}_k^{-1}(\mathbf{x}_i - \boldsymbol{\mu}_k)}{\tau_{ik}},$$

and setting this to zero we obtain a weighted mean of the observations, exactly as in (15).

Now, in order to estimate $\boldsymbol{\Sigma}_k$, we differentiate with respect to $\boldsymbol{\Sigma}_k$ and get

$$\frac{\partial Q}{\partial \boldsymbol{\Sigma}_k} = -\frac{1}{2}\sum_{i=1}^{n} p_{ik}\left[\boldsymbol{\Sigma}_k^{-1} - \frac{\boldsymbol{\Sigma}_k^{-1}(\mathbf{x}_i-\boldsymbol{\mu}_k)(\mathbf{x}_i-\boldsymbol{\mu}_k)^{T}\boldsymbol{\Sigma}_k^{-1}}{\tau_{ik}}\right].$$

Equating this to zero we find (16).

Finally, differentiation with respect to $\tau_{ik}$ leads to

$$\frac{\partial Q}{\partial \tau_{ik}} = p_{ik}\left[-\frac{m}{2\tau_{ik}} + \frac{(\mathbf{x}_i-\boldsymbol{\mu}_k)^{T}\boldsymbol{\Sigma}_k^{-1}(\mathbf{x}_i-\boldsymbol{\mu}_k)}{2\tau_{ik}^{2}}\right].$$

Then, equating to zero leads to (17).

Finally, these critical points can be justified to be local maxima thanks to the usual arguments derived from the expectation of the Gaussian likelihood. ∎

As follows from the derivation of Proposition 1, the estimator for the proportion parameter is given by

$$\hat{\pi}_k = \frac{1}{n}\sum_{i=1}^{n} p_{ik}, \qquad (19)$$

and, plugging (17) into (15) and (16), the fixed-point equations

$$\hat{\boldsymbol{\mu}}_k = \frac{\sum_{i=1}^{n} p_{ik}\,\mathbf{x}_i \big/ \big((\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)^{T}\hat{\boldsymbol{\Sigma}}_k^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)\big)}{\sum_{i=1}^{n} p_{ik} \big/ \big((\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)^{T}\hat{\boldsymbol{\Sigma}}_k^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)\big)} \qquad (20)$$

and

$$\hat{\boldsymbol{\Sigma}}_k = m\, \frac{\sum_{i=1}^{n} p_{ik}\,(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)^{T} \big/ \big((\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)^{T}\hat{\boldsymbol{\Sigma}}_k^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)\big)}{\sum_{i=1}^{n} p_{ik}} \qquad (21)$$

hold for the rest of the estimators, with the scale estimates $\hat{\tau}_{ik}$ recovered as in (17).

It is important to notice that the derivation of the estimators in our model results in the usual robust estimators for the location and dispersion parameters of each distribution. More specifically, both can be assimilated to M-estimators with a certain weight function $u(\cdot)$ [maronna]. Actually, the scatter matrix estimator is very close to the corresponding Tyler's M-estimator of scatter, while the location estimator is close to the corresponding Tyler's M-estimator of location, up to the exponent of the quadratic form appearing in the weights (see [tyler1987distribution, frontera2014hyperspectral] for more details). The main differences arise from the mixture model, which leads to different weights induced by the different distributions. This approach can be seen as a generalization of Tyler's M-estimators to the mixture case.

Thus, $\hat{\boldsymbol{\mu}}_k$ can be written as a weighted mean,

$$\hat{\boldsymbol{\mu}}_k = \frac{\sum_{i=1}^{n} w_{ik}\,\mathbf{x}_i}{\sum_{i=1}^{n} w_{ik}}, \qquad w_{ik} = \frac{p_{ik}}{(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)^{T}\hat{\boldsymbol{\Sigma}}_k^{-1}(\mathbf{x}_i-\hat{\boldsymbol{\mu}}_k)},$$

while $\hat{\boldsymbol{\Sigma}}_k$ can be written as a weighted sample covariance matrix with the same weights (up to the factor $m$). Observations far from a cluster center thus receive small weights, which reaffirms the robust character of our proposal.

The general structure of the proposed algorithm is the same as that of the classical EM algorithm. The main difference between the two algorithms lies in the recursive update equations for the parameter estimates. More precisely, based on equations (20) and (21), we consider four slightly different versions of the algorithm depending on two aspects:

  1. In the first version, the $\tau$ parameter used to compute the estimators is the one obtained at the same iteration of the fixed-point loop.

  2. In version 2, the $\tau$ parameter is instead taken to be the one obtained at the previous iteration.

  3. Based on the similarity with M-estimators, version 3 proposes an accelerated method where expressions such as the quadratic forms in the denominators of the fixed-point equations are replaced by their square.

  4. Finally, version 4 implements the same acceleration procedure on top of the algorithm of version 2.

For concreteness, the complete algorithm in versions 1 and 4 is shown in Algorithm 1.

Input : Data $\{\mathbf{x}_i\}_{i=1}^{n}$, the number of clusters $K$
Output : Clustering labels
1  Set initial random values for $\pi_k$, $\boldsymbol{\mu}_k$, $\boldsymbol{\Sigma}_k$ and $\tau_{ik}$;
2  while not convergence do
3       E: Compute $p_{ik}$ for each $i$ and $k$;
4       M: For each $k$:
5            Update $\pi_k$ as in (19);
6            while not convergence do
7                 Update $\boldsymbol{\mu}_k$ and $\boldsymbol{\Sigma}_k$ with the fixed-point equations (20) and (21), and update $\tau_{ik}$ as in (17);
8            end while
9  end while
10 Sample from a multinomial distribution with probabilities $(p_{i1}, \dots, p_{iK})$ to assign each $\mathbf{x}_i$ to a cluster;
Algorithm 1: General scheme of the proposed algorithm (Versions 1 and 4 differ only in how the fixed-point updates of the inner loop are computed, as described above).

In the case of low dimensions, we truncate the value of $\hat{\tau}_{ik}$ because of numerical problems that arise when cluster points are very close to the mean. That is, if $\hat{\tau}_{ik}$ is smaller than a selected threshold, we set it to that threshold.
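To summarize Section 2, here is a compact sketch close in spirit to Version 1 of Algorithm 1, with the truncation of the scale estimates included; the names, defaults and the final hard assignment by argmax (instead of multinomial sampling) are our choices, not the authors'.

```python
import numpy as np

def flexible_em(X, K, n_iter=100, tau_min=1e-6, seed=0):
    """EM-like clustering with one scale parameter per data point (sketch)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, size=K, replace=False)].astype(float)   # random centers
    Sigma = np.stack([np.eye(m) for _ in range(K)])
    tau = np.ones((n, K))
    for _ in range(n_iter):
        # E-step: membership probabilities p_ik under N(mu_k, tau_ik * Sigma_k).
        logp = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(Sigma[k])
            delta = np.einsum("ij,jk,ik->i", diff, inv, diff)
            _, logdet = np.linalg.slogdet(Sigma[k])
            logp[:, k] = np.log(pi[k]) - 0.5 * (
                m * np.log(2 * np.pi * tau[:, k]) + logdet + delta / tau[:, k])
        logp -= logp.max(axis=1, keepdims=True)        # numerical stability
        p = np.exp(logp)
        p /= p.sum(axis=1, keepdims=True)
        # M-step: update proportions, then the robust fixed-point estimates.
        pi = p.mean(axis=0)
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(Sigma[k])
            delta = np.maximum(np.einsum("ij,jk,ik->i", diff, inv, diff), tau_min)
            w = p[:, k] / delta                        # small weight for distant points
            mu[k] = (w[:, None] * X).sum(axis=0) / w.sum()
            diff = X - mu[k]
            Sigma[k] = m * np.einsum("i,ij,ik->jk", w, diff, diff) / p[:, k].sum()
            Sigma[k] *= m / np.trace(Sigma[k])         # identifiability: tr(Sigma_k) = m
            tau[:, k] = np.maximum(delta / m, tau_min) # truncated scale estimates
    return p.argmax(axis=1)                            # hard labels (argmax, not sampling)
```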

3 Experimental results

In this section, we present experiments with synthetic and real data. We study the convergence of the fixed-point equations and the estimation error on synthetic data, for which the true parameter values are known. Additionally, for the real data we compare the clustering results with the ground-truth labels for k-means, the classical EM algorithm, EM for mixtures of Student's $t$-distributions, HDBSCAN, spectral clustering [spectral] and our robust algorithm. The comparison between the former three and our algorithm is straightforward because they all share a single main parameter (the number of clusters), which we fix and suppose known in our experiments. In the case of spectral clustering, it is necessary to tune an extra parameter in order to build the neighborhood graph; we set the number of neighbors considered in the graph to the value that maximizes the silhouette score [silhouettes]. A fair comparison with HDBSCAN is even more difficult to set up, so we evaluated this algorithm by sweeping a grid of selected values. We then quantify the differences in performance using the usual metrics for the clustering task, the adjusted mutual information (AMI) and the adjusted Rand index (ARI). In some cases, we visualize the 2D embedding of the data obtained by the UMAP [UMAP] algorithm, colored with the labels produced by the different clustering algorithms. This dimensionality reduction algorithm has the same goal as t-SNE [tSNE], but every decision in its design is justified by topological theory.
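Both metrics are available in scikit-learn; a minimal sketch with illustrative toy labels:

```python
# AMI and ARI compare a clustering against ground-truth labels.
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]      # same partition, permuted label names
print(adjusted_mutual_info_score(y_true, y_pred))  # 1.0: both metrics are
print(adjusted_rand_score(y_true, y_pred))         # invariant to label permutation
```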

3.1 Synthetic data

In order to compare the clustering performance of the different algorithms, we simulate settings with different distributions for $\tau$ and different dimensions $m$. We repeat each experiment a number of times and collect the mean values shown in Tables 1 and 2.

The plots in Figure 2 show the convergence of the fixed-point equations for the estimation of $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ for the different versions of the algorithm. As one can see, in both cases convergence is reached in all versions after approximately twenty iterations of the fixed-point loop. No significant differences arise.

Figure 2: Convergence of the fixed-point equations for the estimation of $\boldsymbol{\mu}$ (left) and $\boldsymbol{\Sigma}$ (right).

Furthermore, we studied whether the estimates were indeed accurate by simulating different scenarios and measuring the estimation error. Table 1 shows that the error values are smaller for our flexible and robust algorithm (FR), which can be explained by the robustness of its estimators.

We finally show the mean metrics comparing GMM, Student's $t$ EM and our algorithm. As seen in Table 2, even in the Student's $t$ case, where the second algorithm uses the exact model, our robust algorithm performs better on average.

distribution algorithm μ error Σ error
Student's t GMM 0.2062 2.1076
Student's t FR 0.1796 0.0443
Table 1: Mean quadratic error comparing GMM and our algorithm (FR) for Student's t mixtures.
distribution algorithm ARI
Student's t GMM 0.9558
Student's t tMM 0.8345
Student's t FR 0.9738
Table 2: ARI means in the case of Student's t mixtures.

3.2 Real data

We tested our clustering algorithm on three different real datasets: MNIST [MNIST], small NORB [NORB] and 20newsgroup [20newsgroup]. The MNIST hand-written digits dataset (Figure 3) has become a standard benchmark for classification methods. We apply our flexible and robust model (FR) to discover groups in balanced subsets of similar pairs of digits (3-8 and 1-7) and in the set of digits (3-8-6). We additionally contaminated the latter subset with a small proportion of noise by randomly adding images of the remaining digits.

Figure 3: Two samples of the pair 3-8 from the hand-written MNIST dataset.

As usual in most application examples in the literature, we applied PCA to work with some meaningful features instead of the original data. To balance explained variance against curse-of-dimensionality effects, we kept the number m of variables reported in Tables 3 and 4. As can be seen in these tables, we obtain, in most cases, better values for both metrics than those produced by the other partitioning techniques. This can be explained by the increased flexibility and the smaller impact of outliers on the estimation process. Compared to spectral clustering, our algorithm usually obtains better results; the cases where the metrics are worse correspond to simpler scenarios without noise and with well-separated clusters.
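A minimal sketch of this preprocessing step with scikit-learn, assuming 30 retained components as in the m column of Tables 3 and 4 for the image datasets:

```python
from sklearn.decomposition import PCA

def pca_features(X, n_components=30):
    """Project the raw feature matrix X (n samples x d features) onto its
    first principal components before clustering."""
    return PCA(n_components=n_components).fit_transform(X)
```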

Dataset m n k k-means GMM tMM FR spectral
MNIST 3-8 30 1600 2 0.2203 0.4878 0.5520 0.5949 0.5839
MNIST 7-1 30 1600 2 0.7839 0.8414 0.8947 0.8811 0.8852
MNIST 3-8-6 30 1800 3 0.6149 0.7159 0.7847 0.7918 0.8272
MNIST 3-8-6+noise 30 2080 3 0.3622 0.4418 0.4596 0.4664 0.3511
small NORB 30 1400 4 0.0012 0.0476 0.4894 0.4997 0
20newsgroup 100 1400 4 0.2637 0.3526 0.4496 0.5087 0.1665
Table 3: AMI measuring the performance of k-means, GMM, tMM, spectral clustering and our algorithm (FR) on variations of the MNIST dataset, small NORB and 20newsgroup.
Dataset m n k k-means GMM tMM FR spectral
MNIST 3-8 30 1600 2 0.2884 0.5716 0.6397 0.6887 0.6866
MNIST 7-1 30 1600 2 0.8486 0.8905 0.9432 0.9360 0.9384
MNIST 3-8-6 30 1800 3 0.6338 0.7332 0.8262 0.8306 0.8542
MNIST 3-8-6+noise 30 2080 3 0.4475 0.4909 0.5296 0.5548 0.3115
small NORB 30 1400 4 0.0015 0.0468 0.4769 0.4778 0
20newsgroup 100 1400 4 0.1883 0.2739 0.4426 0.5445 0.0987
Table 4: ARI measuring the performance of k-means, GMM, tMM, spectral clustering and our algorithm (FR) on variations of the MNIST dataset, small NORB and 20newsgroup.

We collected the clustering results from the HDBSCAN algorithm fed with a grid of values for its two main parameters. All the computed metrics comparing the results with the ground truth were poor, close to 0. We show the best clustering result for the 3-8 MNIST subset in Figure 4, where a large number of data points is classified as noise by the algorithm. If the metric is computed only on the data points not labeled as noise, then the clustering is almost perfect. This behaviour can be explained by the number of dimensions, which still seems to be too large for HDBSCAN to deal with.
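A minimal sketch of such a grid sweep, assuming the third-party hdbscan package; the grid values and the use of AMI as the score are illustrative:

```python
import hdbscan
from sklearn.metrics import adjusted_mutual_info_score

def best_hdbscan(X, y_true, sizes=(10, 25, 50, 100), samples=(1, 5, 10)):
    """Sweep HDBSCAN's two main parameters and keep the best AMI score."""
    best = (None, -1.0)
    for mcs in sizes:
        for ms in samples:
            labels = hdbscan.HDBSCAN(min_cluster_size=mcs, min_samples=ms).fit_predict(X)
            score = adjusted_mutual_info_score(y_true, labels)  # noise points keep label -1
            if score > best[1]:
                best = ((mcs, ms), score)
    return best
```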

Additionally, we tried the dimensionality reduction techniques UMAP and t-SNE prior to the clustering task. All metrics improved after careful tuning of the parameters. In this scenario, our method performs similarly to the classical GMM, because these embedding methods tend to pull outliers and noise into clusters. However, these non-linear visualization approaches are not recommended for extracting features before clustering, because fictitious effects can appear depending on the choice of parameters.

Figure 4: UMAP embedding of the 3-8 pair MNIST subset colored by HDBSCAN labels. Red represents the noise label.

For the NORB dataset (some representatives are shown in Figure 5), k-means, GMM, spectral clustering and UMAP+HDBSCAN do not perform in a satisfactory way, since they end up capturing the luminosity as the main classification aspect. In contrast, $t$-EM and the algorithm presented in this paper outperform them by far, as can be seen in Tables 3 and 4. This is evident from Figure 6, where label-colored two-dimensional embeddings of the data, based on the classifications produced by the different methods, are shown. The effect of extreme light values seems to be attenuated by the robust parameter estimators.

Figure 5: Four samples of the small NORB dataset from the 4 considered categories. Differences in brightness between the pictures can be appreciated.
Figure 6: UMAP embedding colored with the corresponding labels. On the top, from left to right: the ground truth labels, the FR clustering labels and the tMM clustering labels. On the bottom, from left to right: the k-means clustering labels and the GMM clustering labels.

We also studied the effect of centering each picture so that the mean is equal for all images. In this case, all methods have similar performance. Comparing the metrics from both cases, the results obtained by our method on the non-centered data remain the best.

Finally, the 20newsgroup dataset is roughly a bag of words constructed from a corpus of news articles classified by topic into twenty groups. Once again, we compare the performance of our method with that of the k-means, EM and t-EM algorithms. The corresponding results are also presented in Tables 3 and 4. One can see that k-means and spectral clustering perform poorly, and that we obtain remarkably better results also with respect to EM and t-EM.

4 Conclusions

In this paper we presented a robust clustering algorithm that models the data distribution as a mixture of compound-Gaussians. It is based on an approximation in which the scale random variables are treated as deterministic parameters. The flexibility of this model makes it particularly suited for analyzing heavy-tailed and/or noise-contaminated data.

More precisely, we introduced four closely related versions of the algorithm that differ only in the iterative resolution of the fixed-point system of equations.

We applied our algorithm to cluster both synthetic and real data. The former are mainly Gaussian and t-distributed randomly generated data. For the latter we considered three well-known datasets, namely the MNIST and NORB image sets, and the 20newsgroups bag-of-words dataset. In all cases convergence was reached.

For the simulated data we obtained accurate estimates and good classification rates. Of course, the best model is the one that coincides with the distribution of the data: for instance, when the mixture is actually Gaussian, GMM outperforms all other methods, including ours.

For the real datasets that we considered, Tables 3 and 4 show that our method gives better results in all cases compared to the partition algorithms k-means, GMM and t-EM. A partial analysis of why this happens was presented.

References