Weighted likelihood mixture modeling and model based clustering

11/08/2018 · Luca Greco et al. · Unisannio and Università di Trento

A weighted likelihood approach for robust fitting of a mixture of multivariate Gaussian components is developed in this work. Two approaches are proposed, driven by a suitable modification of the standard EM and CEM algorithms, respectively. In both techniques, the M-step is enhanced by the computation of weights aimed at downweighting outliers. The weights are based on Pearson residuals stemming from robust Mahalanobis-type distances. Formal rules for robust clustering and outlier detection can also be defined based on the fitted mixture model. The behavior of the proposed methodologies has been investigated by numerical studies and real data examples in terms of fitting and classification accuracy and outlier detection.


1 Introduction

Multivariate normal mixture models represent a very popular tool for both density estimation and clustering

(McLachlan and Peel, 2004). The parameters of a mixture model are commonly estimated by maximum likelihood by resorting to the EM algorithm (Dempster et al., 1977). Let $x_1, \dots, x_n$ be a random sample of size $n$. The mixture likelihood can be expressed as

$$L(\theta) = \prod_{i=1}^{n} \sum_{k=1}^{K} \pi_k\, \phi_p(x_i; \mu_k, \Sigma_k), \qquad (1)$$

where $\theta = (\pi_1, \mu_1, \Sigma_1, \dots, \pi_K, \mu_K, \Sigma_K)$, $\phi_p(\cdot\,; \mu_k, \Sigma_k)$ is the $p$-dimensional multivariate normal density, $\pi = (\pi_1, \dots, \pi_K)^\top$ denotes the vector of prior membership probabilities and $(\mu_k, \Sigma_k)$ are the mean vector and variance-covariance matrix of the $k$-th component, respectively. Rather than using the likelihood in (1), the EM algorithm works with the complete likelihood function

$$L_c(\theta) = \prod_{i=1}^{n} \prod_{k=1}^{K} \left[\pi_k\, \phi_p(x_i; \mu_k, \Sigma_k)\right]^{u_{ik}}, \qquad (2)$$

where $u_{ik}$ is an indicator of the $i$-th unit belonging to the $k$-th component. The EM algorithm iteratively alternates between two steps: expectation (E) and maximization (M). In the E-step, the posterior expectation of (2) is evaluated by setting $u_{ik}$ equal to the posterior probability that $x_i$ belongs to the $k$-th component, i.e.

$$\hat u_{ik} = \frac{\hat\pi_k\, \phi_p(x_i; \hat\mu_k, \hat\Sigma_k)}{\sum_{j=1}^{K} \hat\pi_j\, \phi_p(x_i; \hat\mu_j, \hat\Sigma_j)},$$

whereas at the M-step $\pi_k$, $\mu_k$ and $\Sigma_k$ are estimated conditionally on $\hat u_{ik}$.

An alternative strategy is given by the penalized Classification EM (CEM) algorithm (Symon, 1977; Bryant, 1991; Celeux and Govaert, 1993): the substantial difference is that the E-step is followed by a C-step (where C stands for classification) in which $u_{ik}$ is estimated as either 0 or 1, meaning that each unit is assigned to the most likely component conditionally on the current parameters' values, i.e. $\hat u_{ik} = 1$ if $k = \operatorname{argmax}_j \hat u_{ij}$, and $\hat u_{ik} = 0$ otherwise. The classification approach is aimed at maximizing the corresponding classification likelihood (2) over both the mixture parameters and the individual components' labels. When equal prior membership probabilities $\pi_k = 1/K$ are assumed, the standard CEM algorithm is recovered. A detailed comparison of the EM and CEM algorithms can be found in Celeux and Govaert (1993).
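As a concrete illustration, the following R sketch implements one E-step and the subsequent C-step for a Gaussian mixture; the data matrix `x` and the current parameter values `pi_k`, `mu` and `Sigma` are assumed to be given, and `dmvnorm()` from package mvtnorm provides the component densities.

```r
library(mvtnorm)  # for dmvnorm()

# E-step: posterior membership probabilities u[i, k] given the current
# parameter values; mu and Sigma are lists of length K.
e_step <- function(x, pi_k, mu, Sigma) {
  dens <- sapply(seq_along(pi_k),
                 function(k) pi_k[k] * dmvnorm(x, mu[[k]], Sigma[[k]]))
  dens / rowSums(dens)
}

# C-step: assign each unit to its most likely component.
c_step <- function(u) max.col(u)
```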

When the sample data are prone to contamination and several unexpected outliers occur with respect to (w.r.t.) the assumed mixture model, maximum likelihood is likely to lead to unrealistic estimates and to fail in recovering the underlying clustering structure of the data (see Farcomeni and Greco (2015a) for a recent account). In the presence of noisy data that depart from the underlying mixture model, maximum likelihood needs to be replaced by a suitable robust procedure, leading to estimates and clustering rules that are not badly affected by contamination.

The need for robust tools in the estimation of mixture models was first addressed in Campbell (1984), who suggested replacing standard maximum likelihood with M-estimation. In a more general fashion, Farcomeni and Greco (2015b) proposed to resort to multivariate S-estimation of location and scatter in the M-step. Actually, the authors focused their attention on Hidden Markov Models, but their approach can be adapted from dynamic to static finite mixtures. According to such strategies, each data point is attached a weight lying in $[0, 1]$ (a strategy commonly referred to as soft trimming). An alternative approach to robust fitting and clustering is based on hard trimming procedures, i.e. a crisp 0/1 weight is attached to each observation: atypical observations are expected to be trimmed and the model is fitted by using a subset of the original data. The tclust methodology (Garcia-Escudero et al., 2008; Fritz et al., 2013) is particularly appealing. A very recent extension has been discussed in Dotto et al. (2016), who proposed a reweighted trimming procedure named rtclust. Both tclust and rtclust are based on penalized CEM algorithms modified by a trimming strategy: at each iteration the most distant data points, with distances computed conditionally on the current cluster assignments, are discarded and a trimmed classification likelihood is maximized. A related proposal has been presented in Neykov et al. (2007), based on the so-called trimmed likelihood methodology. Furthermore, it is worth mentioning that mixture model estimation and clustering can also be implemented by using the adaptive hard trimming strategy characterizing the Forward Search (Atkinson et al., 2013).

There are also different proposals aimed at achieving robustness that are not based on soft or hard trimming procedures. Some of them are characterized by the use of flexible components in the mixture, the idea being that of embedding the Gaussian mixture in a supermodel: McLachlan and Peel (2004) introduced a mixture of Student's t distributions, a mixture of skewed Student's t distributions has been proposed in Lin (2010) and Lee and McLachlan (2014), whereas Fraley and Raftery (1998, 2002) considered an additional component modeled as a Poisson process to handle noisy data (the method is available from package mclust in R (Fraley et al., 2012)). A robust approach, named otrimle, has been proposed recently by Coretto and Hennig (2016), who considered the addition of an improper uniform mixture component to accommodate outliers.

We propose a robust version of both the EM and the penalized CEM algorithms to fit a mixture of multivariate Gaussian components based on soft trimming, in which the weights are evaluated according to the weighted likelihood methodology (Markatou et al., 1998). A first attempt in this direction was pursued by Markatou (2000). Here, that approach has been developed further and made more general, leading to a newly established technique in which the weights are based on the recent results stated in Agostinelli and Greco (2018). The methodology leads to a robust fit and is aimed at providing both cluster assignment of genuine data points and outlier detection rules. Data points flagged as anomalous are not meant to be classified into any of the clusters. Furthermore, a relevant aspect of our proposal is represented by the introduction of constraints, not considered in Markatou (2000), aimed at avoiding local or spurious solutions (Fritz et al., 2013).

Some necessary preliminaries on weighted likelihood estimation are given in Section 2. The weighted EM and penalized CEM algorithms are introduced in Section 3, where classification and outlier detection rules are also outlined. Some illustrative examples based on simulated data are given in Section 4, whereas Section 5 is devoted to model selection. Numerical studies are presented in Section 6 and real data examples are discussed in Section 7.

2 Background

Let us assume a mixture model composed of $K$ heterogeneous multivariate Gaussian components, where $K$ is fixed in advance, with density function denoted by $m(x; \theta)$. Markatou (2000) suggested to work with the following weighted likelihood estimating equation (WLEE) in the M-step of the EM algorithm:

$$\sum_{i=1}^{n} w(x_i; \theta) \sum_{k=1}^{K} \hat u_{ik}\, s(x_i; \theta_k) = 0, \qquad (3)$$

where $s(x_i; \theta_k)$ denotes the score function of the $k$-th component. We notice that maximum likelihood estimates are replaced by weighted estimates. The weights are defined as

$$w(x_i; \theta) = \min\left\{1, \frac{\left[A(\delta(x_i)) + 1\right]^+}{\delta(x_i) + 1}\right\}, \qquad (4)$$

where $[\,\cdot\,]^+$ denotes the positive part, $\delta(x)$ is the Pearson residual function and $A(\cdot)$ is the Residual Adjustment Function (RAF, Basu and Lindsay (1994)). The Pearson residual gives a measure of the agreement between the assumed model and the data, the latter being summarized by a non-parametric density estimate $\hat f(x)$, based on a kernel $k(x; t, h)$ indexed by a bandwidth $h$, that is

$$\delta(x) = \frac{\hat f(x)}{m(x; \theta)} - 1, \qquad (5)$$

with $\delta(x) \in [-1, +\infty)$ and $\hat f(x) = n^{-1} \sum_{i=1}^{n} k(x; x_i, h)$. In the construction of Pearson residuals, Markatou (2000) suggested to use a smoothed model density in the continuous case, obtained by using the same kernel involved in non-parametric density estimation (see Basu and Lindsay (1994); Markatou et al. (1998) for general results), i.e.

$$m^*(x; \theta) = \int k(x; t, h)\, m(t; \theta)\, dt.$$

When the model is correctly specified, the Pearson residual function (5) evaluated at the true parameter value converges almost surely to zero, whereas, otherwise, for each value of the parameters, large Pearson residuals detect regions where the observation is unlikely to occur under the assumed model. The weight function (4) can be chosen to be unimodal so that it declines smoothly as the residual departs from zero. Hence, observations lying in such regions are attached a weight that decreases with increasing Pearson residual: large Pearson residuals and small weights will correspond to data points that are likely to be outliers. The RAF bounds the effect of large residuals on the fitting procedure, just as the Huber and Tukey-bisquare functions bound large distances in M-type estimation, and we assume $A(\cdot)$ is such that $A(0) = 0$. Here, we consider the family of RAFs based on the Power Divergence Measure,

$$A_\tau(\delta) = \tau\left[(\delta + 1)^{1/\tau} - 1\right].$$

Special cases are maximum likelihood ($\tau = 1$, as the weights become all equal to one), Hellinger distance ($\tau = 2$), Kullback–Leibler divergence ($\tau \to \infty$) and Neyman's Chi–Square ($\tau = -1$). Another example is given by the Generalized Kullback–Leibler divergence (GKL), defined as

$$A_\tau(\delta) = \frac{\log(\tau \delta + 1)}{\tau}, \qquad 0 < \tau \le 1.$$

Maximum likelihood is a special case when $\tau \to 0$ and Kullback–Leibler divergence is obtained for $\tau = 1$.
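As an illustration, a minimal R sketch of the weight computation in (4) under the GKL RAF follows; the function names are ours and the value of the tuning constant `tau` is an arbitrary example.

```r
# GKL residual adjustment function; delta is a vector of Pearson
# residuals, tau the RAF tuning constant in (0, 1].
raf_gkl <- function(delta, tau = 0.5) log(tau * delta + 1) / tau

# Weight function (4): positive part of the ratio, capped at one.
wle_weights <- function(delta, tau = 0.5) {
  w <- (raf_gkl(delta, tau) + 1) / (delta + 1)
  pmin(1, pmax(w, 0))
}
```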

The shape of the kernel function has a very limited effect on weighted likelihood estimation. On the contrary, the smoothing parameter $h$ directly affects the robustness/efficiency trade-off of the methodology in finite samples. Actually, large values of $h$ lead to Pearson residuals all close to zero and weights all close to one and, hence, large efficiency, since the kernel density estimate is stochastically close to the postulated model. On the other hand, small values of $h$ make the kernel density estimate more sensitive to the occurrence of outliers, and the Pearson residuals become large for those data points that are in disagreement with the model. In other words, in finite samples more smoothing will lead to higher efficiency but larger bias under contamination.

3 Weighted likelihood mixture modeling

The approach proposed by Markatou (2000) exhibits the same limitations that have been highlighted in Agostinelli and Greco (2018) in the case of weighted likelihood estimation of multivariate location and scatter. The main drawbacks are driven by the use of multivariate kernels: in the multivariate setting the computation of weights becomes troublesome, because multivariate non-parametric density estimation is prone to the curse of dimensionality and the selection of $h$ becomes problematic as well. The novel technique proposed in Agostinelli and Greco (2018) is based on the squared Mahalanobis distances

$$d^2(x; \mu, \Sigma) = (x - \mu)^\top \Sigma^{-1} (x - \mu).$$

Then, Pearson residuals and weights are obtained by comparing a univariate kernel density estimate based on the squared distances with their underlying (asymptotic) distribution at the assumed multivariate normal model, rather than working with multivariate data and multivariate kernel density estimates, i.e.

$$\delta_i = \frac{\hat f(d_i^2)}{m_{\chi^2_p}(d_i^2)} - 1, \qquad (6)$$

where $\hat f(\cdot)$ is a univariate kernel density estimate over $(0, +\infty)$, unbiased at the boundary, and $m_{\chi^2_p}(\cdot)$ denotes the $\chi^2_p$ density function. It is worth noting that Pearson residuals can be evaluated w.r.t. the original density, so avoiding model smoothing (see also Kuchibhotla and Basu (2015, 2018)).
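The following R sketch illustrates the construction of the Pearson residuals in (6); the boundary correction is obtained here by estimating the density of the log-transformed squared distances and back-transforming, one of the devices mentioned in Agostinelli and Greco (2018), with the bandwidth `h` assumed given.

```r
# Pearson residuals (6) from squared Mahalanobis distances, a sketch.
pearson_residuals <- function(x, mu, Sigma, h) {
  d2 <- mahalanobis(x, center = mu, cov = Sigma)
  p  <- ncol(x)
  # Kernel density of log(d^2) over the whole real line ...
  fit <- density(log(d2), bw = h)
  # ... back-transformed to (0, Inf): f_{d^2}(t) = f_{log d^2}(log t) / t
  fhat <- approx(fit$x, fit$y, xout = log(d2))$y / d2
  fhat / dchisq(d2, df = p) - 1
}
```

The resulting residuals can then be passed to a weight function such as `wle_weights()` above.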

By exploiting the approach developed in Agostinelli and Greco (2018), we propose a weighted EM algorithm and penalized CEM algorithm whose M-steps are characterized by soft trimming procedures based on the Pearson residuals introduced in (6).

The Weighted EM algorithm (WEM) is structured as follows:

  1. Initialization:

    An appealing starting solution is given by a very robust method such as tclust based on a large rate of trimming, but other robust candidate solutions can be used to initialize the algorithm. In order to avoid dependence on the initial values, a simple and common strategy is to run the algorithm from a number (say, 20 to 50) of starting values. For instance, different initial solutions can be obtained by randomly perturbing the deterministic starting solution and/or the final one obtained from it (Farcomeni and Greco, 2015b). Details on how to choose the best solution will be given at the end of the Section.

  2. E step: the standard E step is left unchanged, with

    $$\hat u_{ik} = \frac{\hat\pi_k\, \phi_p(x_i; \hat\mu_k, \hat\Sigma_k)}{\sum_{j=1}^{K} \hat\pi_j\, \phi_p(x_i; \hat\mu_j, \hat\Sigma_j)}.$$

  3. Weighted M step: based on current parameter estimates,

    1. Soft-trimming: let us evaluate component-wise Mahalanobis-type distances

      $$d_{ik}^2 = (x_i - \hat\mu_k)^\top \hat\Sigma_k^{-1} (x_i - \hat\mu_k).$$

      Then, for each group, compute Pearson residuals and weights as

      $$\delta_{ik} = \frac{\hat f_k(d_{ik}^2)}{m_{\chi^2_p}(d_{ik}^2)} - 1$$

      and

      $$w_{ik} = \min\left\{1, \frac{\left[A(\delta_{ik}) + 1\right]^+}{\delta_{ik} + 1}\right\},$$

      respectively.

    2. Update membership probabilities and component specific parameter estimates:

      $$\hat\mu_k = \frac{\sum_{i=1}^{n} w_{ik}\, \hat u_{ik}\, x_i}{\sum_{i=1}^{n} w_{ik}\, \hat u_{ik}}, \qquad \hat\Sigma_k = \frac{\sum_{i=1}^{n} w_{ik}\, \hat u_{ik}\, (x_i - \hat\mu_k)(x_i - \hat\mu_k)^\top}{\sum_{i=1}^{n} w_{ik}\, \hat u_{ik}}, \qquad \hat\pi_k \propto \sum_{i=1}^{n} w_{ik}\, \hat u_{ik}.$$

    3. Set the updated parameter values and iterate the E- and weighted M-steps until convergence.

It is worth noting that at the M-step it is proposed to solve the following WLEE,

$$\sum_{i=1}^{n} \sum_{k=1}^{K} w_{ik}\, \hat u_{ik}\, s(x_i; \theta_k) = 0, \qquad (7)$$

that is characterized by the evaluation of component-wise sets of weights $w_{ik}$, rather than one weight for each observation, as in equation (3).
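A minimal R sketch of the weighted M-step follows, assuming the matrices `u` (posterior probabilities) and `w` (soft-trimming weights) have already been computed; the update formulas mirror the WLEE (7).

```r
# One weighted M-step update (a sketch): w[, k] comes from the
# soft-trimming step, u[, k] from the E-step.
weighted_m_step <- function(x, u, w) {
  K <- ncol(u)
  wk <- w * u                                   # combined weights
  pi_k <- colSums(wk) / sum(wk)                 # membership probabilities
  mu <- lapply(1:K, function(k) colSums(wk[, k] * x) / sum(wk[, k]))
  Sigma <- lapply(1:K, function(k) {
    xc <- sweep(x, 2, mu[[k]])                  # center the data
    crossprod(sqrt(wk[, k]) * xc) / sum(wk[, k])
  })
  list(pi_k = pi_k, mu = mu, Sigma = Sigma)
}
```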

The Weighted penalized CEM algorithm (WCEM) is obtained by introducing a standard C-step between the E-step and the weighted M-step. The main feature of the WCEM algorithm is that a single weight is attached to each unit, based on its current assignment after the C-step, rather than component-wise weights. Then, the resulting WLEE shows the same structure as in (3), but with the difference that $\hat u_{ik} \in \{0, 1\}$. The WCEM is described as follows:

  1. Initialization: as for the WEM algorithm.

  2. E step: compute $\hat u_{ik}$ as in the WEM algorithm.

  3. C step: let $\hat k(i)$ identify the cluster assignment for the $i$-th unit at the current iteration. Then

    $$\hat k(i) = \operatorname{argmax}_{k = 1, \dots, K} \hat u_{ik}, \qquad \hat u_{ik} = \begin{cases} 1 & k = \hat k(i) \\ 0 & \text{otherwise.} \end{cases}$$

  4. Weighted M step: based on current parameter estimates and cluster assignments $\hat k(i)$,

    1. Soft-trimming: evaluate the Mahalanobis-type distance of each point w.r.t. the component it belongs to,

      $$d_i^2 = (x_i - \hat\mu_{\hat k(i)})^\top \hat\Sigma_{\hat k(i)}^{-1} (x_i - \hat\mu_{\hat k(i)}).$$

      Then, compute the corresponding Pearson residuals and weights as

      $$\delta_i = \frac{\hat f_{\hat k(i)}(d_i^2)}{m_{\chi^2_p}(d_i^2)} - 1$$

      and

      $$w_i = \min\left\{1, \frac{\left[A(\delta_i) + 1\right]^+}{\delta_i + 1}\right\},$$

      respectively, where $\hat f_{\hat k(i)}(\cdot)$ is the kernel density estimate based on the squared distances of the units currently assigned to the $\hat k(i)$-th cluster. Hence, component-wise kernel density estimates only involve distances conditionally on cluster assignment.

    2. Update membership probabilities and component specific parameter estimates as in the WEM algorithm, with $\hat u_{ik} \in \{0, 1\}$.

    3. Set the updated parameter values and iterate until convergence.

It is worth noting that both weighted algorithms return weighted estimates of covariance. The final output can be suitably modified in order to provide unbiased weighted estimates.
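As an example of such a modification, one conventional finite-sample correction for a covariance estimate computed with weights is sketched below; this particular correction factor is our assumption, not the only possible choice.

```r
# A conventional correction for a weighted covariance estimate that was
# computed with denominator sum(w) (frequency-type weights); other
# corrections are possible.
unbias_cov <- function(Sigma_w, w) {
  Sigma_w * sum(w) / (sum(w) - sum(w^2) / sum(w))
}
```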

A strategy to select the best root is given in Agostinelli and Greco (2018): the selected root is that with the lowest fitted probability

(8)

It is worth stressing that the Pearson residuals involved in (8) have to be evaluated conditionally on cluster assignment based on the original multivariate data, for purely selection purposes. The sum of the weights also provides some guidance for root selection. Actually, if $\sum_{i=1}^{n} \hat w_i \approx n$ then the WLE is close to the MLE, whereas if $\sum_{i=1}^{n} \hat w_i$ is too small, then the corresponding WLE is a degenerate solution, indicating that it only represents a small subset of the data. The fitted weights are evaluated w.r.t. the final cluster assignment, as well.

3.1 Eigen-ratio constraint

It is well known that maximization of the mixture likelihood (1) or the classification likelihood (2) is an ill-posed problem, since the objective function may be unbounded (Day, 1969; Maronna and Jacovkis, 1974). Therefore, in order to avoid such problems, the optimization is performed under suitable constraints. In particular, we employed the eigen-ratio constraint defined as

$$\frac{\max_{k}\, \max_{j}\, \lambda_j(\Sigma_k)}{\min_{k}\, \min_{j}\, \lambda_j(\Sigma_k)} \le c, \qquad (9)$$

where $\lambda_j(\Sigma_k)$ denotes the $j$-th eigenvalue of the covariance matrix $\Sigma_k$ and $c$ is a fixed constant not smaller than one, aimed at tuning the strength of the constraint. For $c = 1$ spherical clusters are imposed, while as $c$ increases varying shaped clusters are allowed. The eigen-ratio constraint (9) can be satisfied at each iteration by adjusting the eigenvalues of each $\hat\Sigma_k$. This is achieved by replacing them with a truncated version,

$$\lambda_j^{\mathrm{trunc}}(\Sigma_k) = \begin{cases} m & \lambda_j(\Sigma_k) < m \\ \lambda_j(\Sigma_k) & m \le \lambda_j(\Sigma_k) \le c\, m \\ c\, m & \lambda_j(\Sigma_k) > c\, m, \end{cases}$$

where $m$ is an unknown bound depending on $c$. The reader is pointed to Fritz et al. (2013); Garcia-Escudero et al. (2015) for a feasible solution to the problem of finding $m$.
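A minimal R sketch of the eigenvalue truncation follows, for a given bound `m`; finding the optimal `m` is the problem addressed in Fritz et al. (2013).

```r
# Enforce the eigen-ratio constraint (9) on a list of covariance
# matrices by truncating their eigenvalues into [m, c*m].
truncate_eigen <- function(Sigma_list, c, m) {
  lapply(Sigma_list, function(S) {
    e <- eigen(S, symmetric = TRUE)
    lam <- pmin(pmax(e$values, m), c * m)   # truncated eigenvalues
    e$vectors %*% diag(lam) %*% t(e$vectors)
  })
}
```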

3.2 The selection of $h$

The selection of $h$ is a crucial task. Let $\bar\omega$ be the average of the weights at the true parameter value. Then, the expected value of $1 - \bar\omega$ tells us about the expected rate of contamination in the data. In the case of a normal kernel, Markatou (2000) suggested to tune $h$ for a fixed (approximated) expected downweighting level. According to the authors' experience (see Agostinelli and Greco (2018), Agostinelli and Greco (2017) and Greco (2017), for instance), but also as already suggested by Markatou et al. (1998), a safe selection of $h$ can be achieved by monitoring the empirical downweighting level $1 - \bar\omega$, with $\bar\omega = n^{-1} \sum_{i=1}^{n} \hat w_i$, as $h$ varies, where the weights at convergence are evaluated at the fitted parameter value, conditionally on the final cluster assignment. The monitoring of WLE analyses has been applied successfully in Agostinelli and Greco (2017) to the case of robust estimation of multivariate location and scatter. The reader is pointed to the paper by Cerioli et al. (2017) for an account on the benefits of monitoring.

The tuning of the smoothing parameter could be based on several quantities of interest stemming from the fitted mixture model. For instance, one could monitor the weighted log-likelihood at convergence or unit-specific robust distances conditionally on the final cluster assignment, rather than the empirical downweighting level. A good strategy would be to monitor different quantities, for a varying number of clusters. For instance, if an abrupt change in the monitored empirical downweighting level or in the robust distances is observed, it may indicate the transition from a robust to a non-robust fit and aid in the selection of a value of $h$ that gives an appropriate compromise between efficiency and robustness. It is worth noting that the trimming level in tclust and the improper density constant in otrimle are selected in a monitoring fashion, as well.
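A sketch of such a monitoring loop in R is given below; `fit_wem()` stands for a hypothetical user-supplied WEM fitting routine returning the weights at convergence, and the grid of bandwidths is arbitrary.

```r
# Monitoring the empirical downweighting level over a grid of
# smoothing parameters (fit_wem() is a hypothetical fitting routine).
h_grid <- seq(0.05, 1, by = 0.05)
dw_level <- sapply(h_grid, function(h) {
  fit <- fit_wem(x, K = 3, h = h)
  1 - mean(fit$weights)              # empirical downweighting level
})
plot(h_grid, dw_level, type = "b", xlab = "h",
     ylab = "empirical downweighting level")
```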

3.3 Classification and outlier detection

The WCEM automatically provides a classification of the sample units, since the value of $\hat u_{ik}$ at convergence is either zero or one. With the WEM, by paralleling a common approach, a Maximum-A-Posteriori criterion can be used for cluster assignment, that is, a C-step is applied after the last E-step. Such criteria lead to classifying all the observations, both genuine and contaminated, meaning that outliers are also assigned to a cluster. Actually, we are not interested in classifying outliers, and for purely clustering purposes outliers have to be discarded.

We distinguish two main approaches to outlier detection. According to the first, outlier detection should be based on the robust fitted model and performed separately by using formal rules. The key ingredients in multivariate outlier detection are the robust distances (Rousseeuw and Van Zomeren, 1990; Cerioli, 2010). The reader is pointed to Cerioli and Farcomeni (2011) for a recent account on outlier detection. An observation is flagged as an outlier when its squared robust distance exceeds a fixed threshold, corresponding to the $(1 - \alpha)$-level quantile of the reference (asymptotic) distribution of the squared robust distances. A common solution is represented by the use of the $\chi^2_p$ distribution, and popular choices are $\alpha = 0.05$ and $\alpha = 0.01$. In the case of finite mixtures, the main idea is that the outlyingness of each data point should be measured conditionally on the final assignment. Hence, according to a proper testing strategy, an observation is declared as outlying when

$$d^2\!\left(x_i; \hat\mu_{\hat k(i)}, \hat\Sigma_{\hat k(i)}\right) > \chi^2_{p; 1 - \alpha}. \qquad (10)$$

The second approach stems from hard trimming procedures, such as tclust, rtclust and otrimle. These techniques are not meant to provide simultaneous robust fit and outlier detection based on formal testing rules; rather, outliers are identified with those data points falling in the trimmed set or assigned to the improper density component, respectively. Therefore, by paralleling what happens with hard trimming, one could flag as outliers those data points whose weight, conditionally on the final cluster assignment, is below a fixed (small) threshold. Values such as 0.10 or 0.20 seem reasonable choices. Furthermore, the empirical downweighting level represents a natural upper bound for the cut-off value, which would give an indication of the largest tolerable swamping and of the minimum feasible masking for the given level of smoothing. This approach is motivated by the fact that the multivariate WLE shares important features with hard trimming procedures, even if it is based on soft trimming, as claimed in Agostinelli and Greco (2017).
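Both detection rules are straightforward to implement once the fitted model is available; in the R sketch below, `d2` holds the squared robust distance of each unit from its assigned component and `w` the corresponding fitted weight, both assumed given.

```r
# Flagging outliers conditionally on the final cluster assignment.
alpha <- 0.01
flag_test   <- d2 > qchisq(1 - alpha, df = p)   # testing rule (10)
flag_weight <- w < 0.2                          # weight-based rule
```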

The process of outlier detection may result in type-I and type-II errors. In the former case, a genuine observation is wrongly flagged as an outlier (swamping); in the latter case, a true outlier is not identified (masking). Swamped genuine observations are false positives, whereas masked outliers are false negatives. According to the first strategy, the larger $\alpha$, the more swamping and the less masking. In a similar fashion, the higher the threshold, the more swamping and the less masking will characterize the second approach to outlier detection.

In the following, both approaches to outlier detection will be taken into account and critically compared.

4 Illustrative examples

4.1 Simulated data

Let us consider a three component mixture model with , , , and

and a simulated sample of size , with of background noise. Outliers have been generated uniformly within an hypercube whose dimensions include the range of the data and are such that the distance to the closest component is larger than the -level quantile of a distribution. WEM and WCEM have been run by setting the eigen-ratio restriction constant to (the true value is 9.5). The weights are based on the Generalized Kullback-Libler divergence and a folded normal kernel. The smoothing parameter has been selected by monitoring the empirical downweighting level and unit specific clustering-conditioned distances over a grid of values (Agostinelli and Greco, 2017). Figure 1 displays the monitoring analyses of the empirical downweighting level and the robust distances for the WEM. In both panels an abrupt change is detected, meaning that for values on the right side of the vertical line the procedure is no more able to identify the outliers, hence being not robust w.r.t. the presence of contamination. Similar trajectories are observed for the WCEM. A color map has been used that goes from light gray to dark gray in order to highlight those trajectories corresponding to observations that are flagged as outlying for most of the monitoring. Figures 2 displays the result of applying both the WEM and WCEM algorithm to the sample at hand with an outlier detection rule based on the 0.99-level quantile of the distribution and on a threshold for weights set at 0.2. Component-specific tolerance ellipses are based on the 0.95-level quantile of the distribution. We notice that both methods succeed in recovering the underlying structure of the clean data and that the outlier detection rules provide quite similar and satisfactory outcomes. The entries in Table 1 gives the rate of detected outliers , swamping and masking stemming from the alternative strategies.

$\hat\epsilon$ swamp. mask.
0.388 0.013 0.050
0.408 0.027 0.020
WEM 0.353 0.003 0.123
0.386 0.013 0.055
0.429 0.052 0.005
0.378 0.017 0.080
0.408 0.027 0.020
WCEM 0.333 0.003 0.173
0.366 0.017 0.105
0.417 0.038 0.015
Table 1: Simulated data. Outlier detection from different rules for WEM and WCEM.
Figure 1: Simulated data. Monitoring the empirical downweighting level (left) and robust distances (right) based on WEM. The vertical lines give the selected $h$. The horizontal line gives the reference $\chi^2$ quantile.
Figure 2: Simulated data. Fitted components, cluster assignments and outlier detection by WEM (left) and WCEM (right). Top row: outlier detection based on the testing rule (10). Bottom row: outlier detection based on the fitted weights. Tolerance ellipses superimposed.

5 Model selection

In model based clustering, formal approaches to choose the number of components are based on the value of the log-likelihood function at convergence. Criteria such as the BIC or the AIC are commonly used to select $K$ when running the classical EM algorithm. In a robust setting, in tclust the number of clusters is chosen by monitoring the changes in the trimmed classification likelihood over different values of $K$ and different contamination levels. A formal approach has not been investigated yet in the case of otrimle, even if the authors conjecture that a monitoring approach or the development of information criteria could be pursued as well.

Here, when the robust fit is achieved by the WEM algorithm, we suggest to resort to a weighted counterpart of the classical AIC or BIC criteria, according to the results stated in Agostinelli (2002). Then, the proposed strategy is based on minimizing

$$-2 \sum_{i=1}^{n} \hat w_i \log m(x_i; \hat\theta) + \mathrm{pen}, \qquad (11)$$

where $\hat w_i$ are the weights at convergence and $\mathrm{pen}$ is a penalty term reflecting model complexity (e.g. $\mathrm{pen} = \nu \log n$ for a weighted BIC, with $\nu$ the number of free parameters).
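A direct R transcription of (11) with a BIC-type penalty might look as follows; the function name and arguments are ours.

```r
# Weighted BIC (11), a sketch: loglik_i holds the pointwise mixture
# log-densities at convergence, w the fitted weights, nu the number
# of free parameters.
weighted_bic <- function(loglik_i, w, nu, n = length(w)) {
  -2 * sum(w * loglik_i) + nu * log(n)
}
```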

Figure 3 displays the behavior of the weighted BIC for the set of simulated data of Section 4, for different choices of $K$ over a grid of values for the smoothing parameter $h$. A similar trajectory is observed for the different choices of $K$. The abrupt change is detected at close values of $h$, but the value of $K$ leading to the smaller weighted BIC before the robust fit turns into a non-robust one is preferred.

Figure 3: Example 1. Monitoring the weighted BIC. The vertical line gives the selected $h$.

Furthermore, for what concerns the WCEM algorithm, one could mimic the approach used in tclust and monitor the weighted classification likelihood at convergence for varying $K$ and $h$. Then, the number of clusters should be set equal to the minimum $K$ for which there is no substantial improvement in the objective function when adding one more group.

It can be proved that the robust criterion in (11) is asymptotically equivalent to its classical counterpart at the assumed model, i.e. when the data are not prone to contamination. The proof is based on some regularity conditions about the kernel and the model that are required to assess the asymptotic behavior of the WLE (Agostinelli, 2002; Agostinelli and Greco, 2013, 2018). In the case of finite mixture models, it is assumed further that an ideal clustering setting holds under the postulated mixture model, that is, data are assumed to be well clustered. The following Theorem holds.

Theorem. Let $X_k$ be the set of points belonging to the $k$-th component, whose cardinality is $n_k$. The full data set is defined as $X = \bigcup_{k=1}^{K} X_k$, with $n = \sum_{k=1}^{K} n_k$ and $n_k / n \to \pi_k$. Assume that (i) the model is correctly specified, (ii) the WLE is a consistent estimator of $\theta$, (iii) $n \to \infty$. Then, the weighted criterion (11) and its classical counterpart differ by a term that vanishes in probability.
Proof. Let $\hat\theta$ denote the maximum likelihood estimate. At the assumed model the weights converge almost surely to one and the WLE shares the asymptotic behavior of $\hat\theta$, so the weighted log-likelihood converges to its unweighted counterpart and the two criteria are asymptotically equivalent.

6 Numerical studies

In this section, we investigate the finite sample behavior of the proposed WEM and WCEM algorithms. Both algorithms are still based on non-optimized R code. Nevertheless, the results that follow are satisfactory and the computational time always lies in a feasible range. We set $K = 3$ and simulate data according to the M5 scheme introduced in Garcia-Escudero et al. (2008). Clusters have been generated by $p$-variate Gaussian distributions whose parameters are built, following the M5 design, from a null row vector $\mathbf{0}_q$ of dimension $q$ and the identity matrix $I_q$. Dimensions $p = 2, 5, 10$ have been taken into account. A design parameter regulates the degree of overlapping among clusters: smaller values yield severe overlapping whereas larger values give a better separation; three settings have been considered, corresponding to well separated clusters, moderate overlapping and severe overlapping. Theoretical cluster weights are held fixed. Outliers have been generated uniformly within a hypercube whose dimensions include the range of the data and are such that the distance to the closest component is larger than an extreme quantile of the $\chi^2_p$ distribution, as sketched in the code below. The rate of contamination $\epsilon$ has been set to $10\%$ and $20\%$. The case $\epsilon = 0$ has been used to evaluate the efficiency of the proposed techniques when applied to clean data. The numerical studies are based on Monte Carlo trials. The weighted likelihood algorithms are both based on a folded normal kernel and a GKL RAF, and an eigen-ratio constraint has been imposed. The smoothing parameter $h$ has been selected in such a way that the empirical downweighting level stays close to the actual contamination level under contamination, whereas it remains small when no outliers occur. Fitting accuracy has been evaluated according to the following measures:

  1. the accuracy of the location estimates, $\|\hat M - M\|$, where $\hat M$ and $M$ are matrices with $\hat\mu_k$ and $\mu_k$ in each row, respectively, for $k = 1, \dots, K$;

  2. the accuracy of the scatter estimates, based on the condition number of the matrices $\hat\Sigma_k \Sigma_k^{-1}$;

  3. the accuracy of the estimated membership probabilities, $\|\hat\pi - \pi\|$.
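As a companion to the contamination scheme described above, the following R sketch draws outliers uniformly in the bounding hypercube of the clean data, accepting only points sufficiently far from every component; the 0.99 level is used for illustration and the component parameters `mu` and `Sigma` are assumed available.

```r
# Generate m outliers uniformly in the bounding hypercube of the clean
# data x, keeping only points far from all components.
gen_outliers <- function(m, x, mu, Sigma, level = 0.99) {
  p <- ncol(x); out <- matrix(NA, 0, p); q <- qchisq(level, df = p)
  while (nrow(out) < m) {
    cand <- apply(x, 2, function(col) runif(1, min(col), max(col)))
    d2 <- sapply(seq_along(mu),
                 function(k) mahalanobis(cand, mu[[k]], Sigma[[k]]))
    if (all(d2 > q)) out <- rbind(out, cand)   # accept distant points only
  }
  out
}
```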

For what concerns the task of outlier detection, several strategies have been compared: we considered a detection rule based on the 0.99-level quantile of the $\chi^2_p$ distribution, according to (10), but also rules based on the fitted weights, with thresholds set at fixed small values. For each decision rule, for the contaminated scenario, we report (a) the rate of detected outliers $\hat\epsilon$; (b) the swamping rate; (c) the masking rate. The first is a measure of the fitted contamination level, whereas the others give insights on the level and power of the outlier detection procedure. We notice that comparisons across the different methods should be considered the more fair the closer the values of $\hat\epsilon$ are. For $\epsilon = 0$, swamping only is taken into account.

Classification accuracy has been measured by (i) the Rand index and (ii) the misclassification error rate (MCE), both evaluated over true negatives for the robust techniques. In order to avoid problems due to label switching issues, cluster labels have been sorted according to the first entry of the fitted location vectors.

Under the assumed model, WEM and WCEM have been initialized by tclust with a large rate of trimming, and their behavior has been compared with the EM and CEM algorithms and the otrimle, for the same eigen-ratio constraint and the same initial values. In the presence of contamination, we do not report the results concerning the non-robust EM and CEM but only those regarding the WEM, WCEM, otrimle and the oracle tclust, i.e. with trimming level equal to the actual contamination level (tclust10 and tclust20). In this case, starting values have been driven by tclust as well.

It is worth stressing, here, that the comparison in terms of outlier detection reliability between weighted likelihood estimation, tclust and otrimle can be considered fair only by looking at the rate of weights below the fixed threshold for the former methodology and at the trimmed observations or those assigned to the improper density group for the latter techniques, since formal testing rules have not been considered for either tclust or otrimle.

First, let us consider the behavior of WEM and WCEM at the assumed model. The entries in Table 2 give the considered average measures of fitting accuracy; Table 3 gives the level of swamping according to the different strategies for WEM and WCEM, which are based on the $\chi^2_p$ distribution and on the inspection of the weights; Table 4 reports classification accuracy. The overall behavior of WEM and WCEM is appreciable: we observe a tolerable efficiency loss, a negligible swamping effect and reliable classification accuracy compared with the non-robust procedures. Furthermore, the results are quite similar to those stemming from otrimle, and quite often the inspection of the weights from WEM and WCEM leads to a smaller number of false positives, on average.

The performance of WEM and WCEM under contamination is explored next. The fitting accuracy provided by the proposed weighted likelihood based strategies is illustrated in Tables 5, 6, 7. In all considered scenarios, the behavior of WEM and WCEM is satisfactory and they both compare well with the oracle tclust and otrimle. In particular, the good performance of WEM and WCEM has to be remarked in the challenging situation of severe overlapping. Furthermore, for all data configurations, we notice the ability of WEM to combine accurate estimates of component-specific parameters with those of the cluster weights. The entries in Tables 8, 9, 10 show the behavior of the testing procedure based on the $\chi^2_p$ distribution and of the inspection of the weights for all considered scenarios. The empirical level of contamination is always larger than the nominal one, but it is acceptable and stable as $p$ and $\epsilon$ change. Masking is always negligible, hence highlighting the appreciable power of the testing procedure. We remark that one could also consider multiple testing adjustments in outlier detection, as outlined in Cerioli and Farcomeni (2011). To conclude the analysis, Tables 11, 12, 13 give the considered measures of classification accuracy as the degree of overlapping varies. The results are quite stable across the four methods and all dimensions. As before, WEM and WCEM lead to a satisfactory classification, even in the challenging case of severe overlapping.

7 Real data examples

7.1 Swiss bank note data

Let us consider the well known Swiss banknote dataset, concerning six measurements on old Swiss 1000-franc banknotes, half of which are counterfeit. The weighted likelihood strategy is based on a gamma kernel and a symmetric Chi-square RAF. Our first task is to choose the number of clusters. To this end, we look at the weighted BIC (11) on a fixed grid of values for $h$ and a fixed restriction factor. The inspection of Figure 4 clearly suggests a two-group structure for all considered values of the smoothing parameter $h$. The empirical downweighting level is fairly stable for a wide range of values of $h$, and we selected a value accordingly. The WEM algorithm based on the testing rule (10) leads to the identification of 21 outliers, which include 15 forged and 6 genuine bills. On the contrary, there are 19 data points whose weight is lower than the fixed threshold, which include 14 forged and 5 genuine bills. The cluster assignments stemming from the latter approach are displayed in Figure 5. It is worth noting that the outlying forged bills coincide with the group that has been recognized to follow a different forgery pattern, characterized by a peculiar length of the diagonal (see García-Escudero et al. (2011); Dotto et al. (2016) and references therein). On the other hand, the outlying genuine bills all exhibit some extreme measurements. For the same value of the eigen-ratio constraint, the otrimle assigns 19 bills to the improper component density, leading to the same classification as the WEM, whereas rtclust includes one more counterfeit bill in the trimmed set. A visual comparison between the three results is possible from Figure 6, whose panels show a scatterplot of the fourth against the sixth variable with the classification resulting from WEM (with both outlier detection rules), rtclust and otrimle, respectively. The WCEM has been tuned to achieve the same empirical downweighting level and leads to the same results.
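The banknote data are available, for instance, from the R package mclust; a sketch of the robust initialization step via tclust follows, with the tuning values chosen for illustration only.

```r
library(mclust)   # provides the banknote data
library(tclust)

x <- as.matrix(banknote[, -1])          # drop the Status label
# Very robust starting solution: tclust with a large trimming rate
init <- tclust(x, k = 2, alpha = 0.3, restr.fact = 12)
table(init$cluster, banknote$Status)    # compare with the true labels
```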

Figure 4: Swiss banknote data. Monitoring the weighted BIC for WEM.
Figure 5: Swiss banknote data. Cluster assignments by WEM. Observations whose weight is lower than the fixed threshold are considered outliers. Genuine bills are denoted in green, forged bills in red. Outliers are denoted by a black filled circle.
Figure 6: Swiss banknote data. Fourth against the sixth variable with cluster assignments by WEM (testing rule), WEM (weight-based rule), rtclust and otrimle, in clockwise fashion. Genuine bills are denoted in green, forged bills in red. Outliers are denoted by a black filled circle.

7.2 2018 World Happiness Report data

In this section the weighted likelihood methodology is applied to a dataset from the 2018 World Happiness Report by the United Nations Sustainable Development Solutions Network (Helliwell et al., 2018) (hereafter denoted by WHR18). The data give measures about six key variables used to explain the variation of subjective well-being across countries: per capita Gross Domestic Product (on log scale), Social Support, i.e. the national average of the binary responses to the Gallup World Poll (GWP) question If you were in trouble, do you have relatives or friends you can count on to help you whenever you need them, or not?, Health Life Expectancy at birth, Freedom to make life choices, i.e. the national average of binary responses to the GWP question Are you satisfied or not with your freedom to choose what you do with your life?, Generosity, measured by the residual of regressing the national average of GWP responses to the question Have you donated money to a charity in the past month? on GDP per capita, perception of Corruption, i.e. the national average of binary responses to the GWP questions Is corruption widespread throughout the government or not? and Is corruption widespread within businesses or not?. The dataset is made of 142 rows, after the removal of some countries characterized by missing values. The objective is to obtain groups of countries with a similar behavior, to identify possible countries with anomalous and unexpected traits and to highlight those features that are the main source of separation among clusters.

In this example, a GKL RAF has been chosen, and the unbiased at the boundary kernel density estimate has been obtained by first evaluating a kernel density estimate on the log-transformed squared distances over the whole real line and then back-transforming the fitted density to $(0, +\infty)$ (Agostinelli and Greco, 2018).

In order to select the number of clusters, we monitored the weighted BIC stemming from WEM and the weighted Classification log-Likelihood at convergence from WCEM, for different values of $K$ and $h$. The corresponding monitoring plots are given in Figure 7. Based on the WEM algorithm, a two-group solution would be slightly preferred, even if the gap with the three-group case is very small for all considered values of the smoothing parameter. On the other hand, the inspection of the weighted Classification log-Likelihood driven by the WCEM suggests $K = 3$. Therefore, we have applied our WEM and WCEM algorithms with both choices of $K$. As with two groups the clusters are not very separated, we preferred $K = 3$ and report only those results for reasons of space. Moreover, the results stemming from WEM and WCEM were very similar in terms of fitted parameters, cluster assignments and detected outliers; in the following we only give the results driven by WEM. The empirical downweighting level was found not to depend in a remarkable fashion on the number of groups. In particular, in the monitoring process we did not observe any abrupt change but a smooth decline of the empirical downweighting level until a stabilization of the fitted level of contamination occurred; we then fixed $h$ accordingly. Figure 8 displays the distance plot stemming from WEM. According to the testing rule (10), 12 outliers are detected. A closer inspection of the plot unveils that some of these points are close to the cut-off value; therefore, they are not considered as outliers but correctly assigned to the corresponding cluster. Furthermore, we notice that all the points leading to the largest distances are attached a very small weight. The weight corresponding to Myanmar is about 0.40, whereas the other countries near the threshold line all show weights about equal to 0.80.

The cluster profiles and the raw measurements for the detected outlying countries are reported in Table 14. The three clusters are well separated in terms of all of the items considered, even if small differences are seen in terms of perception of Corruption between clusters 1 and 2. Cluster 3 includes all the countries characterized by the highest level of subjective well-being. The differences with the other two clusters concerning GDP, HLE and Corruption are outstanding. On the contrary, in cluster 1 we find all the countries with the hardest economic and social conditions. For what concerns the explanation of the outliers, we notice that the subgroup of African countries composed of Central African Republic, Chad, Ivory Coast, Lesotho, Nigeria and Sierra Leone might belong to group 1 but is characterized by the six lowest HLE values. For what concerns Myanmar, it exhibits extremely large Freedom and Generosity indexes and a surprisingly small value for Corruption. It should belong to cluster 1, but according to the last three measurements it is closer to cluster 3, indeed. For instance, it may be supposed that such measurements are not completely reliable because of problems with the questionnaires and the sampling, or they may reveal a surprisingly positive attitude despite the difficult economic and life conditions.

A spatial map of the cluster assignments is given in Figure 9, which confirms the coherence of the results and the ability of the considered six features to find reasonable clusters. Cluster 1 is mainly localized in Africa, cluster 2 is composed of developing countries, whereas cluster 3 includes the world leading countries, among which are the USA, Canada, Australia and the countries of the European Union.

Figure 7: WHR18 data. Monitoring the weighted BIC for WEM (left) and the weighted Classification log-Likelihood for WCEM (right).
Figure 8: WHR18 data. Distance plot for WEM with $K = 3$. The horizontal line gives the reference $\chi^2$ quantile.
Figure 9: WHR18 data. Spatial classification from WEM with $K = 3$.

8 Conclusions

We have proposed a robust technique for fitting a finite mixture of multivariate Gaussian components based on recent developments in weighted likelihood estimation. Actually, the proposed methodology is meant to provide a step further with respect to the original proposal in Markatou (2000). The method is based on the idea of computing the weights from a univariate kernel density estimate based on robust distances, rather than from a multivariate one based on the data. Furthermore, the proposed technique is characterized by the introduction of an eigen-ratio constraint aimed at avoiding problems connected with an unbounded likelihood or spurious solutions.

Based on the robustly fitted mixture model, a model based clustering strategy can be built in a standard fashion by looking at the value of the posterior membership probabilities. At the same time, formal rules for outlier detection can be derived as well. Then, one could assign units to clusters provided that the corresponding outlyingness test is not significant, meaning that detected outliers have to be discarded and not assigned to any group. The numerical studies and the real data examples showed the satisfactory reliability of the proposed methodology.

There is still room for further work, along a path shared with tclust, rtclust and otrimle. Actually, the proposed method works for a given smoothing parameter and a fixed number of clusters $K$. In addition, outlier detection depends upon a fixed threshold. At the moment, the selection of $h$ stemming from the monitoring of several quantities, such as the empirical downweighting level, the unit specific robust distances or even the fitted parameters, provides an acceptable adaptive solution. Such a procedure is not different from the implementation of a sequence of refinement steps of an initial robust partition stemming from a sequence of decreasing values of $h$. The selection of $K$ remains a difficult problem to deal with too, despite the satisfactory behavior of the proposed criteria, i.e. the weighted BIC and the weighted Classification log-Likelihood. Outlier detection is a novel aspect in the framework of robust mixture modeling and model based clustering. In the specific context, the outlyingness of each unit is tested conditionally on the final cluster assignment. The number of outliers clearly depends on the chosen level $\alpha$ or on the selected threshold for the final weights. A fair choice of the level of the test is still an open problem in outlier detection. However, the suggested testing strategies work satisfactorily, at least in the considered scenarios, and provide a good compromise between swamping and masking that could be improved further by using multiplicity adjustments (Cerioli and Farcomeni, 2011).

p
2 0.600 0.186 0.022 0.641 0.203 0.023 0.8550 0.248 0.034
WEM 5 0.651 0.802 0.025 0.767 0.822 0.035 0.867 0.886 0.032
10 0.664 1.158 0.022 0.703 1.183 0.024 0.921 1.254 0.036
2 0.614 0.193 0.030 0.687 0.208 0.029 1.273 0.261 0.056
WCEM 5 0.653 0.795 0.033 0.767 0.822 0.035 1.422 0.904 0.064
10 0.681 1.156 0.028 0.829 1.194 0.031 1.715 1.305 0.074
2 0.551 0.154 0.022 0.597 0.169 0.023 0.765 0.184 0.029
EM 5 0.622 0.764 0.025 0.651 0.713 0.025 0.816 0.832 0.029
10 0.659 1.113 0.022 0.697 1.137 0.023 0.933 1.214 0.033
2 0.603 0.160 0.022 0.803 0.182 0.030 1.760 0.274 0.056
CEM 5 0.650 0.770 0.028 0.834 0.790 0.033 1.698 0.874 0.073
10 0.687 1.122 0.022 0.875 1.158 0.030 1.915 1.270 0.081
2 0.594 0.170 0.053 0.622 0.187 0.047 0.773 0.204 0.050
otrimle 5 0.636 0.770 0.028 0.666 0.787 0.031 0.834 0.851 0.039
10 0.660 1.117 0.023 0.699 1.141 0.024 0.914 1.215 0.033
Table 2: Average measures of fitting accuracy for WEM, WCEM, EM, CEM and otrimle with p=2,5,10, at the assumed model ($\epsilon = 0$).
p
2 0.024 0.021 0.017
WEM 5 0.018 0.017 0.016
10 0.018 0.017 0.017
2 0.024 0.020 0.020
WCEM 5 0.017 0.016 0.017
10 0.017 0.017 0.017
2 0.007 0.005 0.005
WEM 5 0.004 0.004 0.004
10 0.004 0.004 0.003
2 0.007 0.005 0.005
WCEM 5 0.004 0.003 0.004
10 0.003 0.003 0.003
2 0.016 0.012 0.010
WEM 5 0.010 0.009 0.008
10 0.009 0.008 0.008
2 0.016 0.011 0.011
WCEM 5 0.008 0.008 0.008
10 0.007 0.007 0.008
2 0.038 0.028 0.031
otrimle 5 0.005 0.007 0.014
10 0.019 0.002 0.002
Table 3: Swamping rate for WEM, WCEM and otrimle with p=2,5,10, at the assumed model ($\epsilon = 0$).
p Rand MCE Rand MCE Rand MCE
2 0.978 0.007 0.929 0.024 0.829 0.062
WEM 5 0.975 0.008 0.929 0.024 0.831 0.062
10 0.974 0.008 0.924 0.026 0.819 0.067
2 0.979 0.007 0.933 0.023 0.834 0.061
WCEM 5 0.976 0.008 0.930 0.024 0.827 0.064
10 0.974 0.008 0.925 0.026 0.803 0.075
2 0.973 0.008 0.928 0.024 0.834 0.060
EM 5 0.974 0.008 0.928 0.024 0.832 0.062
10 0.972 0.009 0.923 0.026 0.818 0.067
2 0.973 0.009 0.927 0.024 0.817 0.068
CEM 5 0.973 0.009 0.927 0.024 0.817 0.068
10 0.971 0.009 0.920 0.027 0.794 0.079
2 0.975 0.008 0.930 0.024 0.837 0.059
otrimle 5 0.974 0.008 0.930 0.024 0.834 0.061
10 0.973 0.009 0.923 0.026 0.820 0.067
Table 4: Average measures of classification accuracy for WEM, WCEM, EM, CEM and otrimle with p=2,5,10, at the assumed model ($\epsilon = 0$).
p
2 0.651 0.219 0.029 0.745 0.249 0.026
WEM 5 0.679 0.886 0.025 0.694 0.894 0.026
10 0.685 1.237 0.023 0.741 1.278 0.027
2 0.629 0.213 0.035 0.761 0.245 0.059
WCEM 5 0.731 0.944 0.045 0.719 1.128 0.045
10 0.732 1.301 0.035 0.780 1.642 0.040
2 0.619 0.219 0.027 0.774 0.261 0.029
tclust 5 0.631 0.796 0.025 0.685 0.834 0.026
10 0.682 1.153 0.023 0.718 1.218 0.028
2 0.638 0.214 0.110 0.744 0.239 0.177
otrimle 5 0.629 0.803 0.068 0.664 0.845 0.124
10 0.668 1.163 0.067 0.704 1.233 0.124
Table 5: Average measures of fitting accuracy for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (well separated clusters).
p
2 0.684 0.254 0.029 0.821 0.280 0.030
WEM 5 0.727 0.883 0.027 0.742 0.900 0.028
10 0.750 1.233 0.028 0.801 1.308 0.030
2 0.685 0.230 0.041 1.087 0.285 0.081
WCEM 5 0.806 1.170 0.045 0.826 1.128 0.039
10 0.858 1.616 0.032 0.928 1.774 0.038
2 0.802 0.264 0.034 0.884 0.336 0.039
tclust 5 0.829 0.818 0.033 0.876 0.871 0.033
10 0.860 1.187 0.033 0.909 1.258 0.036
2 0.652 0.238 0.111 0.731 0.276 0.183
otrimle 5 0.684 0.812 0.068 0.719 0.872 0.126
10 0.731 1.192 0.068 0.782 1.261 0.126
Table 6: Average measures of fitting accuracy for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (moderate overlapping).
p
2 1.043 0.313 0.038 1.131 0.370 0.056
WEM 5 0.837 0.946 0.032 0.899 0.977 0.035
10 0.990 1.294 0.040 1.115 1.375 0.046
2 1.112 0.344 0.058 1.386 0.362 0.081
WCEM 5 1.121 1.356 0.072 1.465 1.236 0.054
10 1.909 1.756 0.086 1.889 1.951 0.076
2 1.728 0.372 0.082 3.431 0.607 0.133
tclust 5 1.477 0.897 0.069 1.685 0.957 0.072
10 1.761 1.301 0.077 1.898 1.348 0.083
2 0.910 0.293 0.122 0.885 0.295 0.183
otrimle 5 0.781 0.876 0.072 0.861 0.923 0.127
10 0.991 1.259 0.077 1.086 1.320 0.132
Table 7: Average measures of fitting accuracy for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (severe overlapping).
p $\hat\epsilon$ swamp. mask. $\hat\epsilon$ swamp. mask.
2 0.118 0.021 0.004 0.227 0.034 0.002
WEM 5 0.120 0.022 0.000 0.217 0.021 0.000
10 0.119 0.021 0.000 0.218 0.022 0.000
2 0.118 0.021 0.005 0.223 0.032 0.015
WCEM 5 0.118 0.020 0.000 0.217 0.017 0.000
10 0.115 0.017 0.000 0.213 0.016 0.000
2 0.096 0.005 0.086 0.211 0.017 0.012
WEM 5 0.108 0.009 0.000 0.205 0.006 0.000
10 0.106 0.007 0.000 0.207 0.009 0.000
2 0.092 0.005 0.125 0.201 0.019 0.069
WCEM 5 0.107 0.008 0.000 0.205 0.006 0.001
10 0.105 0.006 0.000 0.206 0.007 0.000
2 0.110 0.013 0.016 0.231 0.039 0.001
WEM 5 0.118 0.020 0.000 0.213 0.016 0.000
10 0.116 0.018 0.000 0.216 0.020 0.000
2 0.108 0.013 0.038 0.227 0.039 0.022
WCEM 5 0.116 0.018 0.000 0.211 0.014 0.000
10 0.114 0.016 0.001 0.213 0.016 0.000
2 0.109 0.012 0.018 0.263 0.078 0.000
WEM 5 0.126 0.031 0.000 0.224 0.031 0.000
10 0.122 0.024 0.000 0.238 0.048 0.000
2 0.109 0.013 0.016 0.263 0.080 0.007
WCEM 5 0.125 0.028 0.000 0.224 0.030 0.000
10 0.120 0.025 0.000 0.232 0.040 0.000
2 0.100 0.005 0.043 0.200 0.008 0.031
tclust 5 0.100 0.000 0.000 0.200 0.000 0.000
10 0.100 0.000 0.000 0.200 0.000 0.000
2 0.135 0.040 0.019 0.254 0.069 0.005
otrimle 5 0.106 0.003 0.007 0.203 0.003 0.004
10 0.102 0.002 0.001 0.202 0.003 0.000
Table 8: Outlier detection for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (well separated clusters).
p $\hat\epsilon$ swamp. mask. $\hat\epsilon$ swamp. mask.
2 0.120 0.023 0.001 0.218 0.025 0.012
WEM 5 0.126 0.029 0.000 0.216 0.021 0.000
10 0.125 0.028 0.000 0.217 0.021 0.000
2 0.117 0.024 0.048 0.216 0.029 0.038
WCEM 5 0.127 0.031 0.000 0.214 0.017 0.000
10 0.123 0.026 0.000 0.213 0.016 0.000
2 0.096 0.005 0.086 0.202 0.012 0.038
WEM 5 0.108 0.009 0.000 0.205 0.006 0.000
10 0.106 0.007 0.000 0.207 0.009 0.000
2 0.092 0.005 0.125 0.198 0.019 0.085
WCEM 5 0.107 0.008 0.000 0.205 0.006 0.001
10 0.105 0.006 0.000 0.205 0.016 0.000
2 0.110 0.013 0.016 0.221 0.028 0.009
WEM 5 0.118 0.020 0.000 0.212 0.014 0.000
10 0.116 0.018 0.000 0.215 0.019 0.000
2 0.108 0.013 0.038 0.225 0.041 0.042
WCEM 5 0.116 0.018 0.000 0.211 0.013 0.000
10 0.114 0.016 0.001 0.212 0.016 0.000
2 0.109 0.012 0.018 0.250 0.063 0.002
WEM 5 0.126 0.031 0.000 0.223 0.029 0.000
10 0.122 0.024 0.000 0.236 0.046 0.000
2 0.109 0.013 0.016 0.272 0.097 0.025
WCEM 5 0.125 0.028 0.000 0.222 0.028 0.000
10 0.120 0.025 0.000 0.231 0.039 0.000
2 0.100 0.007 0.064 0.200 0.009 0.036
tclust 5 0.100 0.000 0.001 0.200 0.000 0.001
10 0.100 0.000 0.000 0.200 0.000 0.000
2 0.141 0.047 0.005 0.256 0.069 0.000
otrimle 5 0.101 0.001 0.007 0.206 0.008 0.004
10 0.102 0.002 0.001 0.203 0.004 0.000
Table 9: Outlier detection for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (moderate overlapping).
p $\hat\epsilon$ swamp. mask. $\hat\epsilon$ swamp. mask.
2 0.120 0.023 0.001 0.206 0.020 0.052
WEM 5 0.125 0.028 0.000 0.214 0.018 0.000
10 0.123 0.026 0.000 0.216 0.020 0.000
2 0.145 0.050 0.001 0.253 0.045 0.006
WCEM 5 0.128 0.031 0.000 0.214 0.018 0.000
10 0.124 0.027 0.000 0.213 0.017 0.000
2 0.096 0.005 0.086 0.209 0.020 0.003
WEM 5 0.108 0.009 0.000 0.205 0.006 0.000
10 0.106 0.007 0.000 0.207 0.008 0.000
2 0.092 0.005 0.125 0.237 0.046 0.001
WCEM 5 0.107 0.008 0.000 0.206 0.006 0.001
10 0.105 0.006 0.000 0.206 0.007 0.000
2 0.110 0.013 0.016 0.231 0.039 0.001
WEM 5 0.118 0.020 0.000 0.211 0.013 0.000
10 0.116 0.018 0.000 0.215 0.019 0.000
2 0.108 0.013 0.038 0.227 0.039 0.022
WCEM 5 0.116 0.018 0.000 0.211 0.013 0.001
10 0.114 0.016 0.001 0.213 0.016 0.000
2 0.109 0.012 0.018 0.209 0.020 0.003
WEM 5 0.126 0.031 0.000 0.222 0.027 0.000
10 0.122 0.024 0.000 0.235 0.044 0.000
2 0.109 0.013 0.016 0.237 0.046 0.001
WCEM 5 0.125 0.028 0.000 0.223 0.028 0.000
10 0.120 0.025 0.000 0.232 0.041 0.000
2 0.100 0.007 0.061 0.200 0.024 0.095
tclust 5 0.100 0.000 0.002 0.200 0.000 0.001
10 0.100 0.000 0.000 0.200 0.000 0.000
2 0.145 0.050 0.004 0.251 0.064 0.003
otrimle 5 0.102 0.003 0.006 0.202 0.003 0.006
10 0.102 0.003 0.001 0.203 0.004 0.000
Table 10: Outlier detection for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (severe overlapping).
p Rand MCE Rand MCE
2 0.98 0.01 0.98 0.01
WEM 5 0.97 0.01 0.97 0.01
10 0.97 0.01 0.97 0.01
2 0.98 0.01 0.97 0.01
WCEM 5 0.97 0.01 0.97 0.01
10 0.97 0.01 0.97 0.01
2 0.97 0.01 0.97 0.01
tclust 5 0.97 0.01 0.97 0.01
10 0.97 0.01 0.97 0.01
2 0.98 0.01 0.98 0.01
otrimle 5 0.97 0.01 0.97 0.01
10 0.97 0.01 0.97 0.01
Table 11: Average measures of classification accuracy for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (well separated clusters).
p Rand MCE Rand MCE
2 0.93 0.02 0.92 0.03
WEM 5 0.92 0.03 0.93 0.03
10 0.92 0.03 0.92 0.03
2 0.93 0.02 0.91 0.03
WCEM 5 0.92 0.03 0.93 0.03
10 0.92 0.03 0.92 0.03
2 0.93 0.02 0.92 0.03
tclust 5 0.92 0.03 0.93 0.03
10 0.92 0.03 0.92 0.03
2 0.93 0.02 0.93 0.02
otrimle 5 0.92 0.03 0.93 0.03
10 0.92 0.03 0.92 0.03
Table 12: Average measures of classification accuracy for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (moderate overlapping).
p Rand MCE Rand MCE
2 0.82 0.07 0.79 0.07
WEM 5 0.82 0.06 0.82 0.06
10 0.82 0.07 0.81 0.07
2 0.82 0.06 0.79 0.08
WCEM 5 0.81 0.07 0.81 0.07
10 0.79 0.08 0.79 0.08
2 0.82 0.07 0.76 0.07
tclust 5 0.82 0.07 0.82 0.07
10 0.80 0.08 0.80 0.08
2 0.84 0.06 0.83 0.06
otrimle 5 0.83 0.06 0.83 0.06
10 0.82 0.07 0.81 0.07
Table 13: Average measures of classification accuracy for WEM, WCEM, tclust and otrimle with p=2,5,10 and $\epsilon = 10\%, 20\%$ (severe overlapping).
LogGDP HLE Social support Freedom Generosity Corruption Size
Cluster 1 7.88 53.68 0.69 0.69 0.19 0.78 38
Cluster 2 9.35 64.46 0.83 0.74 0.26 0.80 60
Cluster 3 10.48 71.63 0.90 0.83 0.44 0.61 37
Central African Rep. 6.47 44.31 0.31 0.63 0.17 0.88
Chad 7.55 45.66 0.68 0.53 0.17 0.84
Ivory Coast 8.13 46.52 0.66 0.77 0.16 0.76
Lesotho 7.91 46.48 0.80 0.73 0.10 0.74
Myanmar 8.59 57.51 0.79 0.86 0.90 0.62
Nigeria 8.61 45.50 0.78 0.76 0.28 0.89
Sierra Leone 7.22 43.99 0.64 0.67 0.24 0.85
Table 14: WHR18 data: cluster profiles and raw measurements for the detected outlying countries.

References

  • Agostinelli (2002) Agostinelli C (2002) Robust model selection in regression via weighted likelihood methodology. Statistics & probability letters 56(3):289–300
  • Agostinelli and Greco (2013)

    Agostinelli C, Greco L (2013) A weighted strategy to handle likelihood uncertainty in bayesian inference. Computational Statistics 28(1):319–339

  • Agostinelli and Greco (2017) Agostinelli C, Greco L (2017) Discussion on ”the power of monitoring: How to make the most of a contaminated sample”. Statistical Methods & Applications DOI 10.1007/s10260-017-0416-9
  • Agostinelli and Greco (2018) Agostinelli C, Greco L (2018) Weighted likelihood estimation of multivariate location and scatter. Test DOI https://doi.org/10.1007/s11749-018-0596-0
  • Atkinson et al. (2013) Atkinson A, Riani M, Cerioli A (2013) Exploring multivariate data with the forward search. Springer Science & Business Media
  • Basu and Lindsay (1994) Basu A, Lindsay B (1994) Minimum disparity estimation for continuous models: efficiency, distributions and robustness. Annals of the Institute of Statistical Mathematics 46(4):683–705
  • Bryant (1991) Bryant P (1991) Large-sample results for optimization-based clustering methods. Journal of Classification 8(1):31–44
  • Campbell (1984) Campbell N (1984) Mixture models and atypical values. Mathematical Geology 16(5):465–477
  • Celeux and Govaert (1993)

    Celeux G, Govaert G (1993) Comparison of the mixture and the classification maximum likelihood in cluster analysis. Journal of Statistical Computation and simulation 47(3-4):127–146

  • Cerioli (2010) Cerioli A (2010) Multivariate outlier detection with high-breakdown estimators. Journal of the American Statistical Association 105(489):147–156
  • Cerioli and Farcomeni (2011) Cerioli A, Farcomeni A (2011) Error rates for multivariate outlier detection. Computational Statistics & Data Analysis 55(1):544–553
  • Cerioli et al. (2017) Cerioli A, Riani M, Atkinson A, Corbellini A (2017) The power of monitoring: How to make the most of a contaminated sample. Statistical Methods & Applications DOI 10.1007/s10260-017-0409-8
  • Coretto and Hennig (2016) Coretto P, Hennig C (2016) Robust improper maximum likelihood: tuning, computation, and a comparison with other methods for robust gaussian clustering. Journal of the American Statistical Association 111(516):1648–1659
  • Day (1969)

    Day N (1969) Estimating the components of a mixture of normal distributions. Biometrika 56(3):463–474

  • Dempster et al. (1977) Dempster A, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society Series B (Methodological) pp 1–38
  • Dotto et al. (2016) Dotto F, Farcomeni A, Garcia-Escudero LA, Mayo-Iscar A (2016) A reweighting approach to robust clustering. Statistics and Computing pp 1–17
  • Farcomeni and Greco (2015a) Farcomeni A, Greco L (2015a) Robust methods for data reduction. CRC press
  • Farcomeni and Greco (2015b) Farcomeni A, Greco L (2015b) S-estimation of hidden markov models. Computational Statistics 30(1):57–80
  • Fraley and Raftery (1998) Fraley C, Raftery A (1998) How many clusters? which clustering method? answers via model-based cluster analysis. The computer journal 41(8):578–588
  • Fraley and Raftery (2002) Fraley C, Raftery A (2002) Model-based clustering, discriminant analysis, and density estimation. Journal of the American statistical Association 97(458):611–631
  • Fraley et al. (2012) Fraley C, Raftery A, Murphy T, Scrucca L (2012) mclust version 4 for r: Normal mixture modeling for model-based clustering, classification, and density estimation. Technical Report 597, University of Washington, Seattle
  • Fritz et al. (2013) Fritz H, Garcia-Escudero L, Mayo-Iscar A (2013) A fast algorithm for robust constrained clustering. Computational Statistics & Data Analysis 61:124–136
  • Garcia-Escudero et al. (2008) Garcia-Escudero L, Gordaliza A, Matran C, Mayo-Iscar A (2008) A general trimming approach to robust cluster analysis. The Annals of Statistics pp 1324–1345
  • Garcia-Escudero et al. (2015) Garcia-Escudero L, Gordaliza A, Matran C, Mayo-Iscar A (2015) Avoiding spurious local maximizers in mixture modeling. Statistics and Computing 25(3):619–633
  • García-Escudero et al. (2011) García-Escudero LA, Gordaliza A, Matrán C, Mayo-Iscar A (2011) Exploring the number of groups in robust model-based clustering. Statistics and Computing 21(4):585–599
  • Greco (2017) Greco L (2017) Weighted likelihood based inference for P(X < Y). Communications in Statistics-Simulation and Computation 46(10):7777–7789
  • Helliwell et al. (2018) Helliwell J, Layard R, Sachs J (2018) World Happiness Report 2018
  • Kuchibhotla and Basu (2015) Kuchibhotla A, Basu A (2015) A general set up for minimum disparity estimation. Statistics and Probability Letters 96:68–74
  • Kuchibhotla and Basu (2018) Kuchibhotla A, Basu A (2018) A minimum distance weighted likelihood method of estimation. Tech. rep., Interdisciplinary Statistical Research Unit (ISRU), Indian Statistical Institute, Kolkata, India, URL https://faculty.wharton.upenn.edu/wp-content/uploads/2018/02/attemptv4p1.pdf
  • Lee and McLachlan (2014) Lee S, McLachlan G (2014) Finite mixtures of multivariate skew t-distributions: some recent and new results. Statistics and Computing 24(2):181–202
  • Lin (2010) Lin T (2010) Robust mixture modeling using multivariate skew t distributions. Statistics and Computing 20(3):343–356
  • Markatou (2000) Markatou M (2000) Mixture models, robustness, and the weighted likelihood methodology. Biometrics 56(2):483–486
  • Markatou et al. (1998) Markatou M, Basu A, Lindsay BG (1998) Weighted likelihood equations with bootstrap root search. Journal of the American Statistical Association 93(442):740–750
  • Maronna and Jacovkis (1974) Maronna R, Jacovkis P (1974) Multivariate clustering procedures with variable metrics. Biometrics pp 499–505
  • McLachlan and Peel (2004) McLachlan G, Peel D (2004) Finite mixture models. John Wiley & Sons
  • Neykov et al. (2007) Neykov N, Filzmoser P, Dimova R, Neytchev P (2007) Robust fitting of mixtures using the trimmed likelihood estimator. Computational Statistics & Data Analysis 52(1):299–308
  • Rousseeuw and Van Zomeren (1990) Rousseeuw P, Van Zomeren B (1990) Unmasking multivariate outliers and leverage points. Journal of the American Statistical association 85(411):633–639
  • Symon (1977) Symon M (1977) Clustering criterion and multi-variate normal mixture. Biometrics 77:35–43