Robust Multivariate Estimation Based On Statistical Data Depth Filters

In classical contamination models, such as the gross-error model (the Tukey-Huber, or case-wise, contamination model), whole observations are the units to be flagged as outliers or not. This model is very useful when the number of variables is moderately small. Alqallaf et al. [2009] showed the limits of this approach for larger numbers of variables and introduced the independent contamination model (cell-wise contamination), where the cells are the units to be flagged as outliers or not. One approach to deal with both types of contamination at the same time is to filter out the contaminated cells from the data set and then apply a robust procedure able to handle case-wise outliers and missing values. Here we develop a general framework to build filters in any dimension based on statistical data depth functions. We show that previous approaches, e.g., Agostinelli et al. [2015a] and Leung et al. [2017], are special cases. We illustrate our method by using the half-space depth.


1 Introduction

One of the most common problems in real data is the presence of outliers, i.e., observations that are well separated from the bulk of the data; these may be errors that affect the analysis or may carry unexpected information. According to the classical Tukey-Huber Contamination Model (THCM), a small fraction of rows can be contaminated, and these rows are the units considered as outliers. Many methods have since been developed in order to be less sensitive to such outlying observations. A complete introduction to the developments in robust statistics is given in the book by Maronna et al. [2006].

In some applications, e.g., in modern high-dimensional data sets, the entries of an observation (its cells) can be independently contaminated.

Alqallaf et al. [2009] first formulated the Independent Contamination Model (ICM), taking into consideration this cell-wise contamination scheme. Under this paradigm, given a fraction ϵ of contaminated cells, the expected fraction of contaminated rows is

 1−(1−ϵ)^p,

which exceeds the breakdown point of traditional robust estimators for increasing values of the contamination level ϵ and of the dimension p, so that these estimators may fail in this situation. Furthermore, Agostinelli et al. [2015b] show that both types of outliers, case-wise and cell-wise, can occur simultaneously.
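The propagation effect behind this formula is easy to check numerically; the following sketch (plain Python, with illustrative values of ϵ and p) evaluates the expected fraction of contaminated rows:

```python
# Expected fraction of contaminated rows under the ICM:
# a row is clean only if all p of its cells are clean.
def contaminated_row_fraction(eps, p):
    return 1.0 - (1.0 - eps) ** p

# With only 1% of contaminated cells, more than half of the rows
# of a 100-dimensional data set contain at least one outlier.
for p in (10, 100):
    print(p, round(contaminated_row_fraction(0.01, p), 3))
```

Even a tiny cell-wise contamination level therefore overwhelms any case-wise breakdown point once p is large.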

Gervini and Yohai [2002] introduced the idea of an adaptive univariate filter: the proportion of outliers in the sample is identified by measuring the difference between the empirical distribution and a reference distribution; this difference is then used to compute an adaptive cutoff value, and finally a robust and efficient weighted least squares estimator is defined. Starting from this concept of outlier detection,

Agostinelli et al. [2015a] introduced a two-step procedure: in the first step, large cell-wise outliers are flagged by the univariate filter and replaced by NA values [a technique called snipping in Farcomeni, 2014]; in the second step, a generalized S-estimator (GSE) [Danilov et al., 2012] is applied to deal with case-wise outliers. The GSE is used because it has been specifically designed to cope with missing values in multivariate data. Leung et al. [2017] improved this procedure with the following modifications:

• They combined the univariate filter with a bivariate filter to take into account the correlations among variables.

• In order to handle also moderate cell-wise outliers, they proposed a filter as intersection between the univariate-bivariate filter and Detect Deviating Cells (DDC), a filter procedure introduced by Rousseeuw and Van Den Bossche [2018].

• Finally, they constructed a Generalized Rocke S-estimator (GRE) to replace the GSE, to face the loss of robustness in the case of high-dimensional case-wise outliers.

Here, we define a new filter in a general dimension d, with 1 ≤ d ≤ p, based on statistical data depth functions; it will be used in combination with the GSE. Note that for d = 1 we filter the cell-wise outliers, treating the variables as independent. Section 2 introduces the main idea of how to construct filters based on statistical depth functions; subsection 2.1 illustrates the procedure using the half-space depth function, while subsections 2.2 and 2.3 introduce two different strategies to mark observations/cells as outliers. Section 3 shows how the approaches in Agostinelli et al. [2015a] and Leung et al. [2017] are special cases of our framework, and we introduce a statistical data depth function, namely the Gervini-Yohai depth function. Section 4 illustrates the features of our approach using a real data set, while Section 5 reports the results of a Monte Carlo experiment. Appendix A discusses general properties a statistical data depth function should have, Appendix B studies the properties of the Gervini-Yohai depth, and Appendix C contains the full results of the Monte Carlo experiment.

2 Filters based on Statistical Data Depth Function

Let X be an ℝ^p-valued random variable with distribution function F. For a point x ∈ ℝ^p, we consider the statistical data depth d(x;F) of x with respect to F, where d satisfies the four properties given in Liu [1990] and Zuo and Serfling [2000a] and reported in Appendix A of the Supplementary Material. Given an independent and identically distributed sample of size n, we denote by F̂_n its empirical distribution function and by d(x;F̂_n) the sample depth. We assume that d(x;F̂_n) is a uniformly consistent estimator of d(x;F), that is,

 sup_x |d(x;F̂_n) − d(x;F)| → 0 a.s. as n → ∞,

a property enjoyed by many statistical data depth functions, e.g., among others, the simplicial depth [Liu, 1990] and the half-space depth [Tukey, 1975]. One important feature of depth functions is the α-depth trimmed region, given by D_α(F) = {x ∈ ℝ^p : d(x;F) ≥ α}; for any β ∈ (0,1), we denote by D^β(F) the smallest region D_α(F) that has probability larger than or equal to β according to F. Throughout, subscripts and superscripts for depth regions are used for depth levels and probability contents, respectively. Let C_β(F) be the complement in ℝ^p of the set D^β(F), and let α* = sup_x d(x;F) be the maximum of the depth (e.g., α* = 1/2 for the half-space depth of an angularly symmetric distribution).

Given a high-order quantile β, we define a filter of dimension d based on

 d_n = sup_{x∈C_β(F)} {d(x;F̂_n) − d(x;F)}^+, (1)

where {a}^+ represents the positive part of a, and we mark as outliers the ⌊n d_n⌋ observations with the smallest population depth (⌊a⌋ is the largest integer less than or equal to a). This defines a filter in the general dimension d.
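As an illustration of this flagging rule (a sketch under our notation, not the authors' code): given sample depths d(x_i;F̂_n), reference depths d(x_i;F), and an indicator of membership in the low-depth region C_β(F), d_n is the largest positive excess of sample depth over reference depth, and the ⌊n·d_n⌋ points of smallest reference depth are flagged:

```python
import numpy as np

def depth_filter(sample_depth, ref_depth, in_tail):
    """Generic depth-filter sketch.

    sample_depth : d(x_i; F_hat_n) for each observation
    ref_depth    : d(x_i; F) under the reference distribution
    in_tail      : boolean mask for x_i in C_beta(F) (the low-depth region)
    Returns a boolean mask of flagged observations.
    """
    n = len(sample_depth)
    excess = sample_depth - ref_depth
    # d_n = sup over the tail region of the positive part of the excess
    d_n = max(0.0, excess[in_tail].max()) if in_tail.any() else 0.0
    n_flag = int(np.floor(n * d_n))          # number of observations to flag
    flagged = np.zeros(n, dtype=bool)
    if n_flag > 0:
        # flag the n_flag observations with the smallest reference depth
        flagged[np.argsort(ref_depth)[:n_flag]] = True
    return flagged
```

Any depth function satisfying the uniform consistency assumption can be plugged into this scheme by supplying the two depth vectors.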

We have the following result, with obvious proof.

Proposition 1.

If sup_x |d(x;F̂_n) − d(x;F)| → 0 (a.s.), then d_n → 0 (a.s.) as n → ∞.

If the above result holds, then the filter is consistent: the fraction of flagged observations vanishes asymptotically. In the next subsection we illustrate this approach using the half-space depth.

2.1 Filters based on Half-space Depth

Definition 1 (Half-space depth).

Let X be an ℝ^p-valued random variable with distribution function F. For a point x ∈ ℝ^p, the half-space depth of x with respect to F is defined as the minimum probability of all closed half-spaces containing x:

 d_HS(x;F) = min_{H∈H(x)} P_F(X ∈ H),

where H(x) indicates the set of all closed half-spaces in ℝ^p containing x.

A random vector X is said to be elliptically symmetric distributed if it has a density function given by

 f_0(x) ∝ |Σ^{−1/2}| h((x−μ)^⊤ Σ^{−1} (x−μ)),

where h is a non-negative scalar function, μ is the location parameter and Σ is a positive definite matrix. Denote by F_0 the corresponding distribution function and by Δ_x = (x−μ)^⊤ Σ^{−1} (x−μ) the squared Mahalanobis distance of a p-dimensional point x. By Theorem 3.3 of Zuo and Serfling [2000b], if a depth is affine invariant (property 1) and has maximum at the center (property 2) (see Appendix A), then the depth is such that d(x;F_0) = g(Δ_x) for some non-increasing function g, and we can restrict ourselves, without loss of generality, to the case μ = 0 and Σ = I_p, where I_p is the identity matrix of dimension p. Under this setting, it is easy to see that the half-space depth of a given point x is d_HS(x;F_0) = F_1(−√Δ_x), where F_1 is a marginal distribution of F_0.
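This relation can be checked numerically. The sketch below (an illustration, not part of the paper) approximates the half-space depth of a point under a bivariate standard normal by minimizing an empirical tail probability over random projection directions, and compares it with Φ(−√Δ_x):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 2))   # sample from a spherical N(0, I_2)
x = np.array([1.5, 0.0])                # evaluation point, Delta_x = ||x||^2

# Empirical half-space depth: minimize, over directions u, the fraction of
# sample points falling in the closed half-space {z : u'z >= u'x}.
U = rng.standard_normal((500, 2))
U /= np.linalg.norm(U, axis=1, keepdims=True)
depth_emp = min(np.mean(X @ u >= x @ u) for u in U)

depth_theory = norm.cdf(-np.linalg.norm(x))   # Phi(-sqrt(Delta_x))
print(depth_emp, depth_theory)                # both close to 0.067
```

The minimizing direction is the one pointing towards x, which is exactly why the depth reduces to a one-dimensional tail probability.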

If the function h is such that

 exp(−Δ/2)/h(Δ) → 0, Δ → ∞,

then there exists a Δ* such that Φ(−√Δ) < F_1(−√Δ) for all Δ > Δ*, so that d_HS(x;Φ) < d_HS(x;F_0) whenever Δ_x > Δ*, where Φ is the distribution function of the standard normal. Hence,

 sup_{x:Δ_x>Δ*} [d_HS(x;Φ) − d_HS(x;F_0)] < 0

and therefore, for all β such that C_β(F_0) ⊆ {x : Δ_x > Δ*},

 sup_{x∈C_β(F_0)} [d_HS(x;Φ) − d_HS(x;F_0)] < 0.

Given an independent and identically distributed sample X_1,…,X_n, we define the filter in general dimension introduced previously, where here we use the half-space depth:

 d_n = sup_{x∈C_β(F)} {d_HS(x;F̂_n) − d_HS(x;F(T_0n,C_0n))}^+,

where β is a high-order quantile, F̂_n is the empirical distribution function and F(T_0n,C_0n) is a chosen reference distribution which depends on a pair of initial location and dispersion estimators, T_0n and C_0n. Hereafter, we use the normal distribution as reference. For T_0n and C_0n one might use, e.g., the coordinate-wise median and the coordinate-wise MAD for a univariate filter, as in Leung et al. [2017]. In order to compute the value d_n, we have to identify the set C_β(F), where β is a large quantile of F. By Corollary 4.3 in Zuo and Serfling [2000b], and denoting by Δ_x the squared Mahalanobis distance of x computed using the initial location and dispersion estimates, the set C_β(F) can be rewritten as {x : Δ_x > χ²_{p,β}}, where χ²_{p,β} is a large quantile of a chi-squared distribution with p degrees of freedom.

Now we want to show that the result given by Proposition 1 holds for this particular case.

Proposition 2.

Consider a random vector X ∼ F_0 and suppose that F_0 is an elliptically symmetric distribution. Also consider a pair of location and dispersion estimators T_0n and C_0n such that T_0n → μ_0 and C_0n → Σ_0 a.s. Let F(T_0n,C_0n) be a chosen reference distribution and F̂_n the empirical distribution function. If the reference distribution satisfies

 sup_{x∈C_β(F_0)} [d_HS(x;F) − d_HS(x;F_0)] < 0,

where β is some large quantile of F_0, then

 d_n → 0 a.s. as n → ∞.
Proof.

In Donoho and Gasko [1992] it is proved that, for X_1,…,X_n i.i.d. with distribution F_0, as n → ∞,

 sup_{t∈ℝ^p} |d_HS(t;F_0) − d_HS(t;F̂_n)| → 0 a.s.

Note that, by continuity, d_HS(x;F(T_0n,C_0n)) → d_HS(x;F(μ_0,Σ_0)) a.s. Hence, for each ε > 0 there exists n_0 such that for all n ≥ n_0 we have

 sup_{x∈C_β(F_0)} {d_HS(x;F̂_n) − d_HS(x;F(T_0n,C_0n))}
  ≤ sup_{x∈C_β(F_0)} {d_HS(x;F̂_n) − d_HS(x;F_0(μ_0,Σ_0))}
  + sup_{x∈C_β(F_0)} {d_HS(x;F_0(μ_0,Σ_0)) − d_HS(x;F(μ_0,Σ_0))}
  + sup_{x∈C_β(F_0)} {d_HS(x;F(μ_0,Σ_0)) − d_HS(x;F(T_0n,C_0n))}
  ≤ ε/2 + 0 + ε/2 = ε. ∎

In the next example, we illustrate a univariate filter based on half-space depth that controls independently the left and the right tail of the distribution.

Example 1 (Univariate filter with two-tails control).

In the univariate case, given a point x, there exist only two closed half-spaces whose boundary passes through it, hence the half-space depth takes the explicit form

 d_HS(x;F) = min(P_F((−∞,x]), P_F([x,∞))) = min(F(x), 1−F(x)+P_F(X=x)),

and, considering the empirical distribution function F̂_n, the half-space depth becomes

 d_HS(x;F̂_n) = min((1/n)∑_{i=1}^n I(X_i ≤ x), (1/n)∑_{i=1}^n I(X_i ≥ x)).

Consider T_0n and C_0n, a pair of initial location and dispersion estimators; here we choose for T_0n and C_0n the coordinate-wise median and the median absolute deviation (MAD), respectively. For each variable j (j = 1,…,p), we denote the standardized version of X_ij by Z_ij = (X_ij − T_{0n,j})/C_{0n,j}. Let F_j be a chosen reference distribution for Z_ij; here we use the standard normal distribution, i.e., F_j = Φ. Let F̂_{n,j} be the empirical distribution of the standardized values, that is,

 F̂_{n,j}(t) = (1/n)∑_{i=1}^n I(Z_ij ≤ t), j = 1,…,p.

We define the proportion of flagged outliers by

 d_{n,j} = max( sup_{t≤−η_{β,j}} {d_HS(t;F̂_{n,j}) − d_HS(t;F_j)}^+ ; sup_{t≥η_{β,j}} {d_HS(t;F̂_{n,j}) − d_HS(t;F_j)}^+ ),

where η_{β,j} is a large quantile of F_j. Note that, according to (1), we are considering the set {t : |t| ≥ η_{β,j}}, which results in the simpler form written above once the definition of the half-space depth in the univariate case is taken into account. If we consider the order statistics Z_{(1),j} ≤ ⋯ ≤ Z_{(n),j} and define i_− = max{i : Z_{(i),j} ≤ −η_{β,j}} and i_+ = min{i : Z_{(i),j} ≥ η_{β,j}}, the previous expression can be written as

 d_{n,j} = max( max_{i≤i_−} {i/n − F_j(Z_{(i),j})}^+ ; max_{i≥i_+} {F_j(Z_{(i),j}) − (i−1)/n}^+ ). (2)

Then, we flag the ⌊n d_{n,j}⌋ observations with the smallest depth value as cell-wise outliers and replace them by NAs.
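A minimal sketch of the univariate two-tail filter of Example 1, assuming median/MAD standardization and the standard normal reference (the order-statistic form of the two suprema follows the discussion above):

```python
import numpy as np
from scipy.stats import norm

def univariate_hs_filter(x, beta=0.99):
    """Sketch of the univariate two-tail half-space depth filter.

    Standardizes by median/MAD, compares the empirical half-space depth
    with the standard normal one beyond +/- eta_beta, and flags the
    floor(n * d_n) most extreme cells.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) / norm.ppf(0.75)  # consistent MAD
    z = (x - med) / mad
    zs = np.sort(z)                                    # order statistics z_(i)
    eta = norm.ppf(beta)                               # tail cutoff eta_beta
    i = np.arange(1, n + 1)
    # left tail: empirical depth i/n versus reference depth Phi(z_(i))
    left = np.max(np.where(zs <= -eta, i / n - norm.cdf(zs), 0.0), initial=0.0)
    # right tail: empirical depth (n-i+1)/n versus reference 1 - Phi(z_(i))
    right = np.max(np.where(zs >= eta, norm.cdf(zs) - (i - 1) / n, 0.0), initial=0.0)
    d_n = max(left, right)
    n_flag = int(np.floor(n * d_n))
    flagged = np.zeros(n, dtype=bool)
    if n_flag > 0:
        # flag the n_flag cells with the smallest reference depth
        ref_depth = np.minimum(norm.cdf(z), norm.sf(z))
        flagged[np.argsort(ref_depth)[:n_flag]] = True
    return flagged
```

Applied column by column, this produces the NA pattern that the GSE then handles in the second step.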

2.2 A consistent univariate, bivariate and p-variate filter

Given a sample X_1,…,X_n, with X_i ∈ ℝ^p, we first apply the univariate filter described in the previous example to each variable separately. Filtered data are recorded in an auxiliary matrix of zeros and ones, with zero corresponding to an NA value. Next, we identify the bivariate outliers by iterating the filter over all possible pairs of variables. Consider a pair of variables (j,k). The initial location and dispersion estimators are, respectively, the coordinate-wise median and the 2×2 sub-matrix of the estimate computed by the generalized S-estimator on the non-filtered data. Note that this ensures the positive definiteness of the estimate and of each sub-matrix corresponding to a subset of variables. For bivariate points with no components flagged by the univariate filter, we compute the squared Mahalanobis distance Δ_i^{(jk)} and hence apply the bivariate filter, for all pairs j < k. At the end we identify the cells which have to be flagged as cell-wise outliers. The procedure used for this purpose is described in Leung et al. [2017] and reported here. Let

 J = {(i,j,k) : Δ_i^{(jk)} is flagged as a bivariate outlier}

be the set of triplets which identifies the pairs of cells flagged by the bivariate filter in row i. For each cell in the data, we count the number of flagged pairs in the i-th row in which the considered cell is involved:

 m_ij = #{k : (i,j,k) ∈ J}.

In the absence of contamination, m_ij approximately follows a binomial distribution with p−1 trials and success probability equal to the overall proportion of cell-wise outliers left undetected by the univariate filter. Hence, we flag the cell X_ij if m_ij exceeds a large quantile of this binomial distribution. Finally, we perform the p-variate filter, as described in subsection 2.1, on the full data matrix. Detected observations (rows) are directly flagged as p-variate (case-wise) outliers. We denote the procedure based on the univariate, bivariate and p-variate filters by HS-UBPF.
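The cell-flagging step based on the counts m_ij can be sketched as follows; the set J of flagged triplets would come from the bivariate filter, and the undetected-outlier proportion (`delta`) and quantile level used here are placeholder assumptions:

```python
import numpy as np
from scipy.stats import binom

def flag_cells_from_pairs(J, n, p, delta=0.1, level=0.99):
    """Count, for each cell (i, j), the flagged pairs it belongs to,
    and flag it when the count exceeds a binomial quantile.

    J : iterable of triplets (i, j, k), each an unordered pair of
        columns (j, k) flagged as a bivariate outlier in row i.
    """
    m = np.zeros((n, p), dtype=int)
    for i, j, k in J:
        m[i, j] += 1                  # cell (i, j) is involved in the pair
        m[i, k] += 1                  # cell (i, k) is involved as well
    # under no contamination, m_ij ~ Bin(p - 1, delta) approximately
    cutoff = binom.ppf(level, p - 1, delta)
    return m > cutoff
```

The binomial cutoff prevents a single spurious bivariate flag from snipping an otherwise clean cell.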

2.3 A sequential filtering procedure

Suppose we would like to apply a sequence of filters of different dimensions d. For each d, the filter updates the data matrix, adding NA values to the d-tuples identified as d-variate outliers. In this way, each filter is applied only to those d-tuples that have not been flagged as outliers by the filters of lower dimension.

Initial values for each step would be obtained by applying the GSE to the currently filtered values.

This procedure aims to be a valid alternative to the sequence of filters of different dimensions used in the HS-UBPF filter presented above. However, this is a preliminary idea and it has not been implemented yet.

3 Gervini-Yohai d-variate filter

In this section we show that the filters introduced in Agostinelli et al. [2015a] are a special case of our approach, using the following Gervini-Yohai depth:

 d_GY(t;F,G) = 1 − G(Δ(t,μ(F),Σ(F))),

where G is a continuous distribution function, μ(F) and Σ(F) are the location and scatter matrix functionals, and Δ(t,μ(F),Σ(F)) = (t−μ(F))^⊤Σ(F)^{−1}(t−μ(F)) is the squared Mahalanobis distance. Appendix B shows that this is a statistical data depth function. Let G_n be a sequence of discrete distribution functions, possibly depending on F̂_n, such that G_n converges to G; we might define the finite sample version of the Gervini-Yohai depth as

 d_GY(t;F̂_n,G_n) = 1 − G_n(Δ(t,μ(F̂_n),Σ(F̂_n))),

however, for filtering purposes, we will use two alternative definitions later on. The use of G_n, which might depend on the data, instead of G makes this sample depth semiparametric. We notice that the Mahalanobis depth, which is completely parametric, cannot be used to define a filter in a similar fashion.

Let J = (j_1,…,j_d) be a d-tuple of integers and, for ease of presentation, let X^{(d)} = (X_{j_1},…,X_{j_d})^⊤ be a subvector of dimension d of X. Consider a pair of initial location and scatter estimators

 T^{(d)}_0n = (T_{0n,j_1}, …, T_{0n,j_d})^⊤ and C^{(d)}_0n = (C_{0n,j_r j_s})_{r,s=1,…,d}.

Now, define the squared Mahalanobis distance of a data point x^{(d)} by Δ(x^{(d)};F̂_n) = Δ(x^{(d)},T^{(d)}_0n,C^{(d)}_0n). Consider G the distribution function of a χ²_d, H the distribution function of Δ(X^{(d)};F), and let Ĥ_n be the empirical distribution function of the distances Δ(X_i^{(d)};F̂_n) (i = 1,…,n). We consider two finite sample versions of the Gervini-Yohai depth, i.e.,

 d_GY(t;F̂_n,G) = 1 − G(Δ(t;F̂_n)),

and

 d_GY(t;F̂_n,Ĥ_n) = 1 − Ĥ_n(Δ(t;F̂_n)).

The proportion of flagged d-variate outliers is defined by

 d_n = sup_{t∈A} {d_GY(t;F̂_n,Ĥ_n) − d_GY(t;F̂_n,G)}^+.

Here A = {t : Δ(t;F̂_n) ≥ η}, where η is a large quantile of G. Then, we flag ⌊n d_n⌋ observations. It is easy to see that

 d_n = sup_{t∈A} {[1 − Ĥ_n(Δ(t;F̂_n))] − [1 − G(Δ(t;F̂_n))]}^+
    = sup_{t∈A} {G(Δ(t;F̂_n)) − Ĥ_n(Δ(t;F̂_n))}^+
    = sup_{Δ≥η} {G(Δ) − Ĥ_n(Δ)}^+,

since d_GY is a non-increasing function of the squared Mahalanobis distance of the point t.
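In this last form, the Gervini-Yohai filter is straightforward to compute. A sketch, with G the chi-squared distribution function as above and the initial estimates passed in as arguments (the flagging of the ⌊n·d_n⌋ largest distances mirrors the rule stated earlier):

```python
import numpy as np
from scipy.stats import chi2

def gy_filter(X, T0, C0, beta=0.99):
    """Sketch of the Gervini-Yohai d-variate filter.

    Flags the floor(n * d_n) observations with the largest squared
    Mahalanobis distance, where d_n measures how much the chi-squared
    reference tail G exceeds the empirical distance distribution H_n.
    """
    n, d = X.shape
    diff = X - T0
    delta = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C0), diff)
    eta = chi2.ppf(beta, df=d)                # large quantile of G = chi2_d
    ds = np.sort(delta)                       # order statistics of distances
    i = np.arange(1, n + 1)
    # G(Delta_(i)) - H_n(Delta_(i)^-) on the tail region {Delta >= eta}
    excess = np.where(ds >= eta, chi2.cdf(ds, df=d) - (i - 1) / n, 0.0)
    d_n = max(0.0, excess.max())
    n_flag = int(np.floor(n * d_n))
    flagged = np.zeros(n, dtype=bool)
    if n_flag > 0:
        flagged[np.argsort(delta)[-n_flag:]] = True   # largest distances
    return flagged
```

With d = 1 this reduces to a one-sided version of the univariate filter of Example 1.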

We can rephrase Proposition 2 of Leung et al. [2017], which states the consistency property of the filter, as follows.

Proposition 3.

Consider a random vector X ∼ F_0 and a pair of location and scatter estimators T_0n and C_0n such that T_0n → μ_0 and C_0n → Σ_0 a.s. Consider any continuous distribution function G, and let Ĥ_n be the empirical distribution function of Δ(X_i,T_0n,C_0n) and H_0 the distribution function of Δ(X,μ_0,Σ_0). If the distribution G satisfies

 max_{t∈A} {d_GY(t;F_0,H_0) − d_GY(t;F_0,G)} ≤ 0, (3)

where A = {t : Δ(t,μ_0,Σ_0) ≥ η} and η is a large quantile of G, then

 n_0/n → 0 a.s.,

where

 n_0 = ⌊n d_n⌋.
Proof.

Note that

 d_GY(t;F̂_n,Ĥ_n) − d_GY(t;F̂_n,G) = G(Δ(t,T_0n,C_0n)) − Ĥ_n(Δ(t,T_0n,C_0n))

and the condition in equation (3) is equivalent to

 max_{Δ≥η} {G(Δ) − H_0(Δ)} ≤ 0.

The rest of the proof is the same as in Proposition 2. of Leung et al. [2017]. ∎

4 Example

We consider the weekly returns for a portfolio of 20 small-cap stocks used in Leung et al. [2017].

With this example we compare the filter introduced in Agostinelli et al. [2015a] (indicated as GY-UF in the case of the univariate filter and GY-UBF for the univariate and bivariate filter), the same filter with the improvements proposed in Leung et al. [2017] (indicated here as GY-UBF-DDC-C), and the presented filter based on statistical data depth functions using the half-space depth (HS-UF for the univariate filter, HS-UBF for the univariate-bivariate filter, HS-UBPF for the univariate-bivariate-p-variate filter and HS-UBPF-DDC-C for the combination of the HS-UBPF with the modifications in Leung et al. [2017]).

Figure 1 shows the normal QQ-plots of the 20 variables. The returns on all stocks seem to roughly follow a normal distribution, but with the presence of large outliers. The returns on each stock that lie more than 3 MADs away from the coordinate-wise median are displayed in green in the figure. In total, only a small fraction of the cells lie outside these bounds; if these are cell-wise outliers, then they propagate to a much larger fraction of the cases.

Figure 2 shows the squared Mahalanobis distances (MDs) of the weekly returns based on the estimates given by the MLE, the GY-UF, the GY-UBF, the HS-UF, the HS-UBF and the HS-UBPF. Observations with one or more cells flagged as outliers are displayed in green. We say that the estimate identifies an outlier correctly if the MD exceeds a large quantile of a chi-squared distribution with 20 degrees of freedom. We see that the MLE does a very poor job, recognizing only 8 of the 59 cases. The GY-UF, HS-UF, HS-UBF and HS-UBPF show quite similar behavior, doing better than the MLE but missing about one third of the cases. The GY-UBF identifies all but seven of the cases.

Figure 3 shows the Mahalanobis distances produced by GY-UBF-DDC-C and HS-UBPF-DDC-C. Here we can see that the GY-UBF-DDC-C misses 13 of the 59 cases while the HS-UBPF-DDC-C misses 15 cases. Although they do not seem to do a better job overall, these two filters are able to flag some observations, not identified before, as case-wise outliers. These outliers are more clearly highlighted by HS-UBPF-DDC-C.

Figure 4 shows the bivariate scatter plots of WTS versus HTLD, HTLD versus WSBC and WSBC versus SUR, where the GY-UBF and HS-UBF filters are applied, respectively. The bivariate observations with at least one component flagged as an outlier are in blue, and outliers detected by the bivariate filter are in orange. We see that the HS-UBF identifies fewer outliers than the GY-UBF.

5 Monte Carlo results

We performed a Monte Carlo simulation to assess the performance of the proposed filter based on the half-space depth. After the filter flags the outlying cells or observations, the generalized S-estimator is applied to the data with the added missing values. Our simulation study is based on the same setup described in Leung et al. [2017], so that the performance of our filter can be meaningfully compared with that of the filter introduced in their work.

We considered samples of size n from a multivariate normal distribution whose correlations share a common value. We consider the following scenarios:

• Clean data: data without changes.

• Cell-wise contamination: a proportion ϵ of the cells in the data is replaced by outlying values whose size is governed by a constant k.

• Case-wise contamination: a proportion ϵ of the cases in the data matrix is replaced by outliers placed along the direction of the eigenvector corresponding to the smallest eigenvalue of the covariance matrix, with length such that their size is governed by a constant k.

Different proportions of contaminated rows and cells are chosen for the case-wise and the cell-wise scenarios, and the simulation comprises N replicates.

We measure the performance of a given pair of location and scatter estimators μ̂ and Σ̂ using the mean squared error (MSE) and the likelihood ratio test distance (LRT), as in Leung et al. [2017]:

 MSE = (1/N)∑_{i=1}^N (μ̂_i − μ_0)^⊤(μ̂_i − μ_0), LRT(Σ̂,Σ_0) = (1/N)∑_{i=1}^N D(Σ̂_i,Σ_0),

where μ̂_i and Σ̂_i are the estimates in the i-th replication and D(Σ̂_i,Σ_0) is the Kullback-Leibler divergence between two Gaussian distributions with the same mean and covariance matrices Σ̂_i and Σ_0. Finally, we computed the maximum average LRT distances over all contamination values k.
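The distance D(Σ̂,Σ_0) is easy to compute directly; the sketch below uses the standard Kullback-Leibler-type expression trace(Σ̂Σ_0^{-1}) − log det(Σ̂Σ_0^{-1}) − p for centered Gaussians (the exact normalization constant used in the paper is our assumption):

```python
import numpy as np

def lrt_distance(S_hat, S0):
    """KL-type distance between two centered Gaussians with covariances
    S_hat and S0: trace(S_hat S0^{-1}) - log det(S_hat S0^{-1}) - p.
    It is zero iff S_hat == S0 (both assumed positive definite)."""
    p = S0.shape[0]
    M = S_hat @ np.linalg.inv(S0)
    return np.trace(M) - np.log(np.linalg.det(M)) - p
```

Averaging this quantity over the N replicates yields the reported LRT values.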

Table 1 shows the average LRT distances under cell-wise contamination. We see that the univariate and univariate-bivariate filters have more problems in filtering moderate cell-wise outliers, while they show a stable and nearly optimal behavior for increasing contamination values k. GY-UBF-DDC-C and HS-UBPF-DDC-C have lower maximum average LRT distances, but higher distances for large k. This behavior is shown in Figure 5 (top), where the average LRT distances are displayed versus different contamination values k for a given cell-wise contamination level and dimension.

Table 2 shows the maximum average LRT distances under case-wise contamination. Overall, the GY-UBF-DDC-C and HS-UBPF-DDC-C outperform all the other filters. Excluding these two, we see that the HS-UBPF is competitive in the case of moderate case-wise contamination. An illustration of their behavior is given in Figure 6 (top), which shows the average LRT distances for different values of k for a given case-wise contamination level and dimension.

Table 3 and Table 4 show the maximum average MSE under cell-wise and case-wise contamination, respectively. The values in the tables are the MSE values multiplied by 1000 for better visualization and model comparison. Under case-wise contamination, the GY-UBF-DDC-C and HS-UBPF-DDC-C outperform the other filters, and they are also competitive under cell-wise contamination. In Figure 5 (bottom) and Figure 6 (bottom) the average MSE is displayed versus different contamination values for the cell-wise and case-wise scenarios, respectively. We highlight the nice redescending performance of the HS-UBPF for both LRT and MSE, not shared by the other filters.

6 Conclusions

Starting from the two-step procedure introduced in Agostinelli et al. [2015a] and improved by Leung et al. [2017], we presented a new filter based on statistical data depth functions that can be used in place of the previous filters and is intended as a generalization of them. Furthermore, we also combined the depth filter HS-UBPF with DDC, as suggested by Leung et al. [2017]. As shown in the example, the filter HS-UBPF is able to identify large outlying observations and removes fewer cells than the GY-UBF. In addition, it also detects the case-wise outliers, which are clearly highlighted.

Considering the performance of the entire procedure, our simulations show that using HS-UBPF we obtain the best estimates in the case of a moderate proportion of contamination, while remaining competitive for higher percentages of contamination and for high-dimensional data sets, under both types of contamination models. Generally, the GY-UBF and HS-UBPF combined with DDC outperform the other filters; differences in performance between these two estimators are not clearly visible. However, the HS-UBPF has shown, especially under case-wise contamination, an interesting behaviour for moderate contamination levels.

Further research on this filter is needed to explore the performance of the estimator on different types of data and how it varies with respect to the sample size and the dimension, for example in flat data sets. In addition, different statistical data depth functions could be used in place of the half-space depth.

Appendix A Statistical data depth properties

Definition 2 (Depth Function).

A depth function d measures the centrality of a point x ∈ ℝ^p with respect to a probability distribution F:

 d : ℝ^p → ℝ^+ ∪ {0}, x ↦ d(x;F).

A statistical depth function should satisfy the following properties [Liu, 1990, Zuo and Serfling, 2000a]:

1. Affine invariance: d(Ax+b;F_{AX+b}) = d(x;F_X) for any nonsingular matrix A and any vector b;

2. Maximality at center: if F is “symmetric” around μ, then d(μ;F) ≥ d(x;F) for all x; for a more detailed discussion on symmetry see Serfling [2006];

3. Monotonicity: if (2) holds, then

 d(x;F) ≤ d(μ+α(x−μ);F), α∈[0,1];

4. Approaching zero: d(x;F) → 0 as ‖x‖ → ∞.

Appendix B Gervini-Yohai depth

Here we show that the Gervini-Yohai depth, defined as d_GY(t;F,G) = 1 − G(Δ(t,μ(F),Σ(F))), is a proper statistical depth function, i.e., it satisfies the four properties introduced above.

1. Affine invariance: it follows directly from the affine invariance property of the Mahalanobis distance;

2. Maximality at center: if F is elliptically symmetric around μ(F),

 d_GY(μ(F);F,G) = 1 − G(Δ(μ(F),μ(F),Σ(F))) = 1 − G(0).

For any t ≠ μ(F) we have

 Δ(t,μ(F),Σ(F)) > 0,
 G(Δ(t,μ(F),Σ(F))) ≥ G(0),
 1 − G(Δ(t,μ(F),Σ(F))) ≤ 1 − G(0),
 d_GY(t;F,G) ≤ d_GY(μ(F);F,G),

and when G is strictly monotone, strict inequality holds and μ(F) is the unique maximizer of the Gervini-Yohai depth.

3. Monotonicity: for α ∈ [0,1],

 Δ(μ(F)+α(t−μ(F)),μ(F),Σ(F)) = α²(t−μ(F))^⊤Σ(F)^{−1}(t−μ(F)) = α²Δ(t,μ(F),Σ(F)) ≤ Δ(t,μ(F),Σ(F)).

Then d_GY(t;F,G) ≤ d_GY(μ(F)+α(t−μ(F));F,G).

4. Approaching zero: if ‖t‖ → ∞, then Δ(t,μ(F),Σ(F)) → ∞ and consequently G(Δ(t,μ(F),Σ(F))) → 1. Then

 d_GY(t;F,G) = 1 − G(Δ(t,μ(F),Σ(F))) → 0.

Appendix C Monte Carlo experiment

Results for all combinations of the model parameters explored in the Monte Carlo simulation are reported in this section.

In Figures 7, 8 and Figures 9, 10, the average LRT distances and the average MSE are displayed versus different cell-wise contamination values, respectively.

Figures 11, 12 and Figures 13, 14 show the average LRT distances and the average MSE versus different case-wise contamination values, respectively.

References

• Agostinelli et al. [2015a] C. Agostinelli, A. Leung, V.J. Yohai, and R.H. Zamar. Robust estimation of multivariate location and scatter in the presence of cellwise and casewise contamination. TEST, 24(3):441–461, 2015a.
• Agostinelli et al. [2015b] C. Agostinelli, A. Leung, V.J. Yohai, and R.H. Zamar. Rejoinder on: Robust estimation of multivariate location and scatter in the presence of cellwise and casewise contamination. TEST, 24(3):484–488, 2015b.
• Alqallaf et al. [2009] F. Alqallaf, S. Van Aelst, R. H. Zamar, and V. J. Yohai. Propagation of outliers in multivariate data. The Annals of Statistics, 37(1):311–331, 2009.
• Danilov et al. [2012] M. Danilov, V.J. Yohai, and R.H. Zamar. Robust estimation of multivariate location and scatter in the presence of missing data. Journal of the American Statistical Association, 107:1178–1186, 2012.
• Donoho and Gasko [1992] D.L. Donoho and M. Gasko. Breakdown properties of location estimates based on halfspace depth and projected outlyingness. The Annals of Statistics, 20(4):1803–1827, 1992.
• Farcomeni [2014] A. Farcomeni. Robust constrained clustering in presence of entry-wise outliers. Technometrics, 56(1):102–111, 2014.
• Gervini and Yohai [2002] D. Gervini and V.J. Yohai. A class of robust and fully efficient regression estimators. The Annals of Statistics, 30(2):583–616, 2002.
• Leung et al. [2017] A. Leung, V.J. Yohai, and R.H. Zamar. Multivariate location and scatter matrix estimation under cellwise and casewise contamination. Computational Statistics and Data Analysis, 111:59–76, 2017.
• Liu [1990] R.Y. Liu. On a notion of data depth based on random simplices. The Annals of Statistics, 18(1):405–414, 1990.
• Maronna et al. [2006] R.A. Maronna, R.D. Martin, and V.J. Yohai. Robust statistics: theory and methods. Wiley, Chichester, 2006.
• Rousseeuw and Van Den Bossche [2018] P.J. Rousseeuw and W. Van Den Bossche. Detecting deviating data cells. Technometrics, 60(2):135–145, 2018.
• Serfling [2006] R.J. Serfling. Multivariate symmetry and asymmetry. Encyclopedia of statistical sciences, pages 5338–5345, 2006.
• Tukey [1975] J.W. Tukey. Mathematics and the picturing of data. In Proceedings of the International Congress of Mathematicians, volume 2, pages 523–531, 1975.
• Zuo and Serfling [2000a] Y. Zuo and R. Serfling. General notions of statistical depth function. The Annals of Statistics, 28(2):461–482, 2000a.
• Zuo and Serfling [2000b] Y. Zuo and R.J. Serfling. Structural properties and convergence results for contours of sample statistical depth functions. The Annals of Statistics, 28(2):483–499, 2000b.