Robustness of Maximum Correntropy Estimation Against Large Outliers

03/23/2017 ∙ Badong Chen, et al. ∙ Xi'an Jiaotong University, University of Florida, Southwest Jiaotong University

The maximum correntropy criterion (MCC) has recently been successfully applied in robust regression, classification and adaptive filtering, where the correntropy is maximized instead of the well-known mean square error (MSE) being minimized, so as to improve robustness with respect to outliers (or impulsive noises). Considerable efforts have been devoted to developing various robust adaptive algorithms under MCC, but so far little insight has been gained as to how the optimal solution is affected by outliers. In this work, we study this problem in the context of parameter estimation for a simple linear errors-in-variables (EIV) model in which all variables are scalar. Under certain conditions, we derive an upper bound on the absolute value of the estimation error and show that the optimal solution under MCC can be very close to the true value of the unknown parameter even with outliers (whose values can be arbitrarily large) in both input and output variables. Illustrative examples are presented to verify and clarify the theory.

I Introduction

Second-order statistical measures (e.g., MSE, variance, correlation) are the most widely used in machine learning, signal processing and control applications due to their simplicity and efficiency. Learning performance under these measures, however, deteriorates dramatically when the data contain outliers (samples that deviate significantly from the bulk of the data). Robust statistical measures against outliers (or impulsive noises) are thus of great practical interest; the fractional lower order moments (FLOMs) [1, 2], least absolute deviation (LAD) [3, 4, 5] and M-estimation costs [6, 7, 8] are typical examples. In particular, correntropy, a local similarity measure, has recently emerged as a promising alternative for robust learning in impulsive noise environments [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Since correntropy is insensitive to large errors (usually caused by outliers), it can suppress the adverse effects of outliers with large amplitudes. Under the maximum correntropy criterion (MCC), the regression (or adaptive filtering) problem is formulated as maximizing the correntropy between the desired responses and the model outputs [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27].

Up to now, many adaptive algorithms (gradient based, fixed-point based, half-quadratic based, etc.) under MCC have been developed to improve learning performance in the presence of outliers [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. However, little insight has been gained so far regarding the impact of outliers on the optimal solution under MCC. In the present work, we study this problem in order to better understand the robustness of MCC. To simplify the analysis, we focus on parameter estimation for a simple linear errors-in-variables (EIV) model [28] in which all variables are scalar. Under certain conditions, we derive an upper bound on the absolute value of the estimation error. Based on this result, we conclude that the optimal estimate under MCC can be very close to the true value of the unknown parameter even in the presence of outliers (whose values can be arbitrarily large) in both input and output variables.

The rest of the paper is organized as follows. In Section II, we describe the problem under consideration. In Section III, we derive the main results. In Section IV, we present illustrative examples, and in Section V we conclude the paper.

Fig. 1: Simple errors-in-variables model

II MCC-Based Parameter Estimation for a Simple EIV Model

Consider the simple linear EIV model shown in Fig. 1, where $w_0$ denotes an unknown scalar parameter that needs to be estimated. Let $x_k$ be the true but unobserved input of the unknown system at instant $k$, and $y_k$ be the observed output. The observed output and the true input of the unknown system are related via

$$y_k = w_0 x_k + v_k, \qquad (1)$$

where $v_k$ denotes the output (observation) noise. In addition, $w$ is the model's parameter, and $u_k = x_k + n_k$ is the observed input, in which $n_k$ stands for the input noise. In general, both $n_k$ and $v_k$ are assumed to be independent of $x_k$. The model's output is given by

$$\hat{y}_k = w\, u_k. \qquad (2)$$

Our goal is thus to determine a value of $w$ that is as close to $w_0$ as possible. A simple approach is to solve for $w$ by minimizing the MSE, that is,

$$w_{\mathrm{MSE}} = \arg\min_{w} \mathbb{E}\left[e_k^2\right] = \arg\min_{w} \mathbb{E}\left[(y_k - w u_k)^2\right], \qquad (3)$$

where $e_k = y_k - \hat{y}_k$ is the error between the observed output and the model output, $\mathbb{E}[\cdot]$ denotes the expectation operator, and $w_{\mathrm{MSE}}$ stands for the optimal solution under MSE. However, this solution is usually inconsistent, i.e., the parameter estimate does not tend to the true value even with very large samples. Some more sophisticated methods such as total least squares (TLS) [29, 30, 31, 32, 33, 34] may give an unbiased estimate, but prior knowledge has to be used and the computational cost is relatively high.
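To see the inconsistency concretely, assume in addition that $x_k$, $n_k$ and $v_k$ are zero-mean and mutually independent (an assumption made here only for illustration). Setting the derivative of $\mathbb{E}\left[(y_k - w u_k)^2\right]$ to zero yields the familiar attenuation bias:

$$w_{\mathrm{MSE}} = \frac{\mathbb{E}[u_k y_k]}{\mathbb{E}[u_k^2]} = \frac{w_0\,\sigma_x^2}{\sigma_x^2 + \sigma_n^2},$$

where $\sigma_x^2$ and $\sigma_n^2$ denote the variances of $x_k$ and $n_k$. The estimate is shrunk toward zero whenever $\sigma_n^2 > 0$, no matter how many samples are available.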

Another approach is based on the MCC, under which the model parameter is determined by [9, 10, 11]

$$w_{\mathrm{MCC}} = \arg\max_{w} V_\sigma(y, \hat{y}), \qquad (4)$$

where $V_\sigma(y, \hat{y}) = \mathbb{E}\left[\exp\left(-(y - \hat{y})^2/(2\sigma^2)\right)\right]$ is the correntropy between $y$ and $\hat{y}$, with $\sigma > 0$ being the kernel bandwidth, and $w_{\mathrm{MCC}}$ denotes the corresponding optimal solution. Note that the kernel width $\sigma$ is a key free parameter in MCC, which controls the robustness of the estimator. When the kernel width is very large, MCC becomes approximately equivalent to the MSE criterion. In most practical situations, however, the error distribution is unknown, and one has to use the sample mean to approximate the expected value. Given $N$ observed input-output samples $\{u_k, y_k\}_{k=1}^{N}$, the MCC estimate is obtained by solving

$$\hat{w}_{\mathrm{MCC}} = \arg\max_{w} J_N(w), \quad J_N(w) = \frac{1}{N}\sum_{k=1}^{N} \exp\left(-\frac{(y_k - w u_k)^2}{2\sigma^2}\right), \qquad (5)$$

where $J_N(w)$ is the sample-mean estimator of the correntropy. Throughout this paper, our notation does not distinguish between random variables and their realizations; the meaning should be clear from the context. It is worth noting that in practical applications the empirical approximation in (5) is often used directly as the optimization cost, although it does not necessarily approach the expected value as the number of samples goes to infinity.
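As a concrete illustration, here is a minimal sketch of the empirical objective $J_N(w)$ in (5) (Python/NumPy; the function and variable names are ours, not the paper's):

```python
import numpy as np

def correntropy_objective(w, u, y, sigma):
    """Empirical correntropy J_N(w) in (5): mean Gaussian-kernel
    similarity between observed outputs y and model outputs w*u."""
    e = y - w * u                      # per-sample errors
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))
```

Because each sample contributes at most $1/N$ to $J_N(w)$, a handful of outliers, however large, can lower the objective only by a bounded amount; this is the intuition formalized in Section III.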

The non-concave optimization problem in (5) has no closed-form solution, but it can be solved effectively by iterative algorithms such as gradient-based methods [17, 18], fixed-point methods [22, 25] and half-quadratic methods [14, 15, 16], or by evolutionary algorithms such as the estimation of distribution algorithm (EDA) [35, 36, 37].
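For reference, setting the derivative of $J_N(w)$ to zero gives $\sum_k \phi_k (y_k - w u_k) u_k = 0$ with weights $\phi_k = \exp\left(-(y_k - w u_k)^2/(2\sigma^2)\right)$, which suggests the following fixed-point iteration, a sketch in the spirit of the fixed-point methods cited above rather than a transcription of any specific algorithm in [22, 25]:

```python
import numpy as np

def mcc_fixed_point(u, y, sigma, n_iter=50, w_init=0.0):
    """Fixed-point iteration for the scalar MCC estimate in (5).
    Outliers receive exponentially small weights phi and are
    effectively ignored by the weighted least-squares update."""
    w = w_init
    for _ in range(n_iter):
        e = y - w * u
        phi = np.exp(-e**2 / (2.0 * sigma**2))   # per-sample weights
        w = np.sum(phi * u * y) / np.sum(phi * u**2)
    return w
```

Each update is a correntropy-weighted least-squares step; since (5) is non-concave, such an iteration may settle on a local maximum, which is why a global search (EDA) is used in the simulations of Section IV.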

III Main Results

Before proceeding, we introduce some notation and assumptions. Let $\varepsilon_u$ and $\varepsilon_v$ be two non-negative numbers, $I = \{1, 2, \ldots, N\}$ be the sample index set, and $I_0$ be a subset of $I$ satisfying $|n_k| \le \varepsilon_u$ and $|v_k| \le \varepsilon_v$, $\forall k \in I_0$. In addition, the following two assumptions are made:

Assumption 1: $M > N/2$, where $M = |I_0|$ denotes the cardinality of the set $I_0$;

Assumption 2: $\exists\, \delta > 0$ such that $|u_k| \ge \delta$, $\forall k \in I_0$.

Remark 1: Assumption 1 means that there are $M$ (more than $N/2$) samples in which the amplitudes of the input and output noises satisfy $|n_k| \le \varepsilon_u$ and $|v_k| \le \varepsilon_v$, and $N - M$ (at least one) samples that may contain large outliers with $|n_k| > \varepsilon_u$ or $|v_k| > \varepsilon_v$ (possibly $|n_k| \gg \varepsilon_u$ or $|v_k| \gg \varepsilon_v$). Assumption 2 is reasonable since, for a finite number of samples, the minimum amplitude of the observed inputs is in general larger than zero.

With the above notations and assumptions, the following theorem holds:

Theorem 1: Let $c = \varepsilon_v + |w_0|\varepsilon_u$. If $\sigma > \sigma^*$, where $\sigma^* = c\big/\sqrt{2\ln\left(M/(N-M)\right)}$, then the optimal solution $\hat{w}_{\mathrm{MCC}}$ under the MCC criterion satisfies $|\hat{w}_{\mathrm{MCC}} - w_0| \le \xi$, where
$$\xi = \frac{1}{\delta}\left(c + \sigma\sqrt{2\ln\frac{M}{M\exp\left(-c^2/(2\sigma^2)\right) - (N-M)}}\right).$$

Proof: Since $|I_0| = M$, we have $|I \setminus I_0| = N - M$. To prove $|\hat{w}_{\mathrm{MCC}} - w_0| \le \xi$, it will suffice to prove $J_N(w) < J_N(w_0)$ for any $w$ satisfying $|w - w_0| > \xi$. Since $|n_k| \le \varepsilon_u$ and $|v_k| \le \varepsilon_v$ for every $k \in I_0$, we have $|y_k - w_0 u_k| = |v_k - w_0 n_k| \le \varepsilon_v + |w_0|\varepsilon_u = c$. As every summand in (5) is non-negative, it follows easily that

$$J_N(w_0) \ge \frac{M}{N}\exp\left(-\frac{c^2}{2\sigma^2}\right). \qquad (6)$$

Further, if $|w - w_0| > \xi$, we have $\forall k \in I_0$,

$$|y_k - w u_k| \overset{(a)}{\ge} |w - w_0||u_k| - |y_k - w_0 u_k| \overset{(b)}{>} \xi\delta - c, \qquad (7)$$

where (a) comes from the triangle inequality $|y_k - w u_k| \ge |(w - w_0)u_k| - |y_k - w_0 u_k|$, and (b) follows from Assumption 2 ($|u_k| \ge \delta$) together with $|w - w_0| > \xi$ and $|y_k - w_0 u_k| \le c$. Since $\xi\delta \ge c$ by construction, we thus have $\forall k \in I_0$,

$$\exp\left(-\frac{(y_k - w u_k)^2}{2\sigma^2}\right) < \exp\left(-\frac{(\xi\delta - c)^2}{2\sigma^2}\right). \qquad (8)$$

Then we have $J_N(w) < J_N(w_0)$ for any $w$ satisfying $|w - w_0| > \xi$, because

$$J_N(w) \overset{(c)}{<} \frac{1}{N}\left(M\exp\left(-\frac{(\xi\delta - c)^2}{2\sigma^2}\right) + (N - M)\right) \overset{(d)}{=} \frac{M}{N}\exp\left(-\frac{c^2}{2\sigma^2}\right) \le J_N(w_0), \qquad (9)$$

where (c) comes from (8) and the fact that each summand in (5) is at most one for $k \notin I_0$, and (d) follows from the definition of $\xi$, which gives $M\exp\left(-(\xi\delta - c)^2/(2\sigma^2)\right) = M\exp\left(-c^2/(2\sigma^2)\right) - (N - M)$. This completes the proof.

The following two corollaries are direct consequences of Theorem 1.

Corollary 1: Assume that $M > 2(N - M)$, and let $\sigma \ge c\big/\sqrt{2\ln\left(M/(2(N-M))\right)}$, with $c = \varepsilon_v + |w_0|\varepsilon_u$. Then the optimal solution under MCC satisfies $|\hat{w}_{\mathrm{MCC}} - w_0| \le \xi_1$, where
$$\xi_1 = \frac{1}{\delta}\left(c + \sigma\sqrt{2\ln\frac{M}{N - M}}\right). \qquad (10)$$

Corollary 2: If $\varepsilon_u = \varepsilon_v = 0$, then for any $\sigma > 0$ the optimal solution under MCC satisfies $|\hat{w}_{\mathrm{MCC}} - w_0| \le \xi_2$, where
$$\xi_2 = \frac{\sigma}{\delta}\sqrt{2\ln\frac{M}{2M - N}}. \qquad (11)$$

Remark 2: According to Corollary 1, if $M > 2(N - M)$ and the kernel width is larger than a certain value, the absolute value of the estimation error will be upper bounded by (10). In particular, if both $\varepsilon_u$ and $\varepsilon_v$ are very small, the upper bound will also be very small. This implies that the MCC solution can be very close to the true value ($\hat{w}_{\mathrm{MCC}} \approx w_0$) even in the presence of outliers whose values can be arbitrarily large, provided that there are $M$ ($> N/2$) samples disturbed only by small noises (bounded by $\varepsilon_u$ and $\varepsilon_v$). In the extreme case, as stated in Corollary 2, if $\varepsilon_u = \varepsilon_v = 0$, we have $\xi_2 \to 0$ as $\sigma \to 0+$. In this case, the MCC estimate is almost unbiased when the kernel width is small enough.

It is worth noting that, due to the inequalities used in the derivation, the actual errors in practical situations are usually much smaller than, and rather far from, the derived upper bound $\xi$. This fact will be confirmed by the simulation results in the next section.
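To get a feel for the bound, the following snippet evaluates the threshold $\sigma^*$ and the bound $\xi$ of Theorem 1; all concrete numbers are hypothetical choices for illustration, not values taken from the experiments:

```python
import numpy as np

def mcc_error_bound(N, M, eps_u, eps_v, delta, w0, sigma):
    """Upper bound xi on |w_MCC - w0| from Theorem 1.
    Requires M > N/2 (and N > M when c > 0), and sigma above sigma_star."""
    c = eps_v + abs(w0) * eps_u
    sigma_star = c / np.sqrt(2.0 * np.log(M / (N - M))) if c > 0 else 0.0
    assert sigma > sigma_star, "kernel width below the admissible threshold"
    inner = M * np.exp(-c**2 / (2.0 * sigma**2)) - (N - M)
    return (c + sigma * np.sqrt(2.0 * np.log(M / inner))) / delta

# Illustrative numbers: 1000 samples, 100 of them possibly outliers.
print(mcc_error_bound(N=1000, M=900, eps_u=0.05, eps_v=0.05,
                      delta=0.5, w0=2.0, sigma=0.5))
```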

Remark 3: Although the analysis results in this paper cannot be applied directly to improve estimation performance in practice, they explain clearly why and how MCC estimation is robust with respect to outliers, especially those with large amplitudes. In addition, according to Theorem 1 and Corollaries 1-2, the kernel bandwidth plays an important role in MCC and should be set to a proper value (possibly close to the threshold $\sigma^*$) so as to achieve the best performance. How to optimize the bandwidth in practice is, however, a very complicated problem and is left open in this work.

Remark 4: In robust statistics, there is an important concept called the breakdown point, which quantifies the smallest proportion of "bad" data in a sample that a statistic can tolerate before returning arbitrary values. The MCC estimator is essentially a redescending M-estimator, whose breakdown point has been extensively studied in the literature [38, 39, 40, 41, 42]. In particular, it has been shown that the breakdown point of redescending M-estimators with a bounded objective function can be very close to 1/2 in location estimation [38, 39]. The present work, however, investigates the robustness of a particular redescending M-estimator, namely the MCC estimator, in different ways: 1) an EIV model is considered; 2) a bound on the estimation error is derived.

Fig. 2: Optimal solutions under MSE, LAD, TLS and MCC, and the region between $w_0 \pm \xi$, with different input outlier amplitudes $A_u$ ($A_v$ fixed)
Fig. 3: Optimal solutions under MSE, LAD, TLS and MCC, and the region between $w_0 \pm \xi$, with different output outlier amplitudes $A_v$ ($A_u$ fixed)
TABLE I: Mean deviation results of the optimal solutions under MSE, LAD, TLS and MCC with different $A_u$ ($A_v$ fixed)
TABLE II: Mean deviation results of the optimal solutions under MSE, LAD, TLS and MCC with different $A_v$ ($A_u$ fixed)

IV Illustrative Examples

IV-A Example 1

We assume that the true value $w_0$ of the parameter in Fig. 1 is fixed, and that the true input signal $x_k$ is uniformly distributed over a finite interval. The input noise $n_k$ and output noise $v_k$ are assumed to follow Gaussian mixture models, given by

$$n_k \sim (1 - \rho_u)\,\mathcal{N}(0, \sigma_{u,1}^2) + \rho_u\,\mathcal{N}(A_u, \sigma_{u,2}^2), \qquad (12)$$
$$v_k \sim (1 - \rho_v)\,\mathcal{N}(0, \sigma_{v,1}^2) + \rho_v\,\mathcal{N}(A_v, \sigma_{v,2}^2), \qquad (13)$$

where $\mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian density function with mean $\mu$ and variance $\sigma^2$, and $\rho_u$, $\rho_v$ are two weighting factors that control the proportions of the outliers (located around $A_u$ or $A_v$) in the observed input and output signals. In the simulations below, unless mentioned otherwise, the variances and the weighting factors $\rho_u$, $\rho_v$ are held fixed. The MCC solutions are obtained using the estimation of distribution algorithm (EDA) [35, 36, 37].
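A minimal sketch of this setup is given below (Python). All concrete numbers ($w_0$, the support of $x_k$, the mixture variances, $\rho_u = \rho_v = 0.1$, etc.) are hypothetical stand-ins for the elided values, and the EDA is a bare-bones Gaussian version rather than the specific algorithm of [35, 36, 37]:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- hypothetical setup (the paper's exact numbers are not reproduced) ---
N, w0, sigma = 1000, 2.0, 0.5
A_u, A_v = 5.0, 5.0                  # outlier amplitudes
rho_u = rho_v = 0.1                  # outlier proportions

x = rng.uniform(-2.0, 2.0, N)        # true (unobserved) input
n = np.where(rng.random(N) < rho_u,  # input noise: Gaussian mixture (12)
             rng.normal(A_u, 0.1, N), rng.normal(0.0, 0.05, N))
v = np.where(rng.random(N) < rho_v,  # output noise: Gaussian mixture (13)
             rng.normal(A_v, 0.1, N), rng.normal(0.0, 0.05, N))
u, y = x + n, w0 * x + v             # observed input / output

# --- bare-bones Gaussian EDA maximizing J_N(w) in (5) ---
def J(w):
    return np.mean(np.exp(-(y - w * u)**2 / (2 * sigma**2)))

m, s = 0.0, 5.0                      # initial search distribution
for _ in range(50):
    pop = rng.normal(m, s, 200)                         # sample candidates
    elite = pop[np.argsort([J(w) for w in pop])][-40:]  # keep best 20%
    m, s = elite.mean(), elite.std() + 1e-6             # refit distribution

print("w_MCC ~", m)
```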

First, we illustrate the optimal solutions under MSE, LAD, TLS and MCC for different amplitudes of the outliers. Note that the larger the values of $A_u$ and $A_v$, the larger the outliers. Fig. 2 shows the optimal solutions $w_{\mathrm{MSE}}$, $w_{\mathrm{LAD}}$, $w_{\mathrm{TLS}}$, $w_{\mathrm{MCC}}$ and the region between $w_0 \pm \xi$ for different $A_u$, where $A_v$ is held fixed. For each value of $A_u$, 1000 i.i.d. samples are generated, and $\xi$ is computed using (10) with the corresponding values of $\varepsilon_u$, $\varepsilon_v$, $\delta$ and $\sigma$. Similarly, Fig. 3 shows the optimal solutions and the region between $w_0 \pm \xi$ for different $A_v$, where $A_u$ is held fixed. The corresponding mean deviation results of the optimal solutions over 100 Monte Carlo runs are given in Table I and Table II. From these results we observe: 1) the MCC solution lies within the region between $w_0 \pm \xi$, is rather close to the true value, and is very little influenced by either input or output outliers; 2) the estimation error can be much smaller in amplitude than the upper bound $\xi$; 3) the MSE, LAD and TLS solutions are sensitive to outliers and can go far beyond the region between $w_0 \pm \xi$. In particular, the MSE solutions are very sensitive to the input outliers.

Second, we show how the solutions are affected by the outliers' occurrence probabilities (namely $\rho_u$ and $\rho_v$). Fig. 4 illustrates the optimal solutions under MSE, LAD, TLS and MCC for different occurrence probabilities, with the other parameters held fixed. As one can see, the MCC solution is very close to the true value (almost unbiased) when the occurrence probability is smaller than a certain value, although it deteriorates dramatically, moving far from the true value, as the probability is further increased. The MSE, LAD and TLS solutions, however, worsen as the occurrence probability increases, even when it is very small (namely, when the outliers are very sparse). Notice that if the occurrence probability is too large, Assumption 1 may not hold and the derived upper bound becomes inapplicable.

Further, Fig. 5 illustrates the optimal solutions under MSE and MCC for different kernel widths $\sigma$, with the outlier amplitudes and occurrence probabilities held fixed. As expected, the MCC solution approaches the MSE solution as the kernel width increases. In order to remain robust against large outliers, the kernel width in MCC should in general be set to a relatively small value.

IV-B Example 2

The problem considered so far is that of estimating a simple EIV model with only one unknown parameter. It is very important to extend the current results to the multi-dimensional dynamic case. This is, however, not straightforward, since inequality (a) in (7) does not hold in the multi-dimensional case. Here, we present a simulation study for such a case, and our results suggest that a dynamic EIV model can also be robustly estimated by MCC. Consider a 9-tap FIR system with weight vector $W_0$. The true input signal is zero-mean Gaussian with variance 1.0, and the distribution models of the input and output noises are assumed to be the same as those in the previous example. In the simulation, the variances and the weighting factors are held fixed, and 2000 i.i.d. samples are generated. The squared weight error norms of MSE, LAD, TLS and MCC for different amplitudes of the input and output outliers are presented in Fig. 6 (with the output outlier amplitude fixed at 2.0) and Fig. 7 (with the input outlier amplitude fixed at 5.0). From Fig. 6 and Fig. 7, one can see that the squared weight error norm of MCC is very small and little affected by either input or output outliers, while the other methods are sensitive to outliers, especially input outliers.
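For the vector case, the scalar fixed-point update sketched in Section II generalizes to a correntropy-weighted least-squares iteration; the following is our own illustrative sketch, not the exact solver used in this simulation:

```python
import numpy as np

def mcc_fir_fixed_point(U, y, sigma, n_iter=50):
    """Fixed-point MCC estimate of an FIR weight vector.
    U: (N, L) matrix of observed input regressors, y: (N,) outputs."""
    w = np.zeros(U.shape[1])
    for _ in range(n_iter):
        e = y - U @ w
        phi = np.exp(-e**2 / (2.0 * sigma**2))   # per-sample weights
        # Weighted least squares: w = (U^T diag(phi) U)^{-1} U^T diag(phi) y
        w = np.linalg.solve(U.T @ (phi[:, None] * U), U.T @ (phi * y))
    return w
```

The squared weight error norm reported in Fig. 6 and Fig. 7 is then $\|w - W_0\|^2$.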

Fig. 4: Optimal solutions under MSE, LAD, TLS and MCC with different outlier occurrence probabilities (other parameters fixed)
Fig. 5: Optimal solutions under MSE and MCC with different kernel widths $\sigma$
Fig. 6: Squared weight error norms of MSE, LAD, TLS and MCC with different input outlier amplitudes (output outlier amplitude fixed at 2.0)
Fig. 7: Squared weight error norms of MSE, LAD, TLS and MCC with different output outlier amplitudes (input outlier amplitude fixed at 5.0)

V Conclusion

In this work, we investigated the robustness of the maximum correntropy criterion (MCC) against large outliers, in the context of parameter estimation for a simple linear errors-in-variables (EIV) model in which all variables are scalar. Under certain conditions, we derived an upper bound on the amplitude of the estimation error. The obtained results suggest that the MCC estimate can be very close to the true value of the unknown parameter even with outliers (whose values can be arbitrarily large) in both the input and output variables. The analysis has been verified by illustrative examples. Extending the results of this study from the simple EIV model to the multivariable case is, however, not straightforward, and remains a challenge for future study.

References

  • [1] Min Shao and Chrysostomos L Nikias. Signal processing with fractional lower order moments: stable processes and their applications. Proceedings of the IEEE, 81(7):986–1010, 1993.
  • [2] Chrysostomos L Nikias and Min Shao. Signal Processing with Alpha-stable Distributions and Applications. Wiley-Interscience, 1995.
  • [3] James L Powell. Least absolute deviations estimation for the censored regression model. Journal of Econometrics, 25(3):303–325, 1984.
  • [4] David Pollard. Asymptotics for least absolute deviation regression estimators. Econometric Theory, 7(2):186–199, 1991.
  • [5] Liang Peng and Qiwei Yao. Least absolute deviations estimation for arch and garch models. Biometrika, 90(4):967–975, 2003.
  • [6] Peter J Rousseeuw and Annick M Leroy. Robust Regression and Outlier Detection, volume 589. John Wiley & Sons, 2005.
  • [7] Yuexian Zou, Shing-Chow Chan, and Tung-Sang Ng. Least mean m-estimate algorithms for robust adaptive filtering in impulse noise. Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on, 47(12):1564–1569, 2000.
  • [8] Shing-Chow Chan and Yue-Xian Zou. A recursive least m-estimate algorithm for robust adaptive filtering in impulsive noise: fast algorithm and convergence performance analysis. Signal Processing, IEEE Transactions on, 52(4):975–991, 2004.
  • [9] Weifeng Liu, Puskal P Pokharel, and José C Príncipe. Correntropy: properties and applications in non-gaussian signal processing. Signal Processing, IEEE Transactions on, 55(11):5286–5298, 2007.
  • [10] Jose C Principe. Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives. Springer Science & Business Media, 2010.
  • [11] Badong Chen, Yu Zhu, Jinchun Hu, and Jose C Principe. System Parameter Identification: Information Criteria and Algorithms. Newnes, 2013.
  • [12] Badong Chen and José C Príncipe. Maximum correntropy estimation is a smoothed map estimation. Signal Processing Letters, IEEE, 19(8):491–494, 2012.
  • [13] Abhishek Singh and Jose C Principe. A loss function for classification based on a robust similarity metric. In Neural Networks (IJCNN), The 2010 International Joint Conference on, pages 1–6. IEEE, 2010.
  • [14] Ran He, Bao-Gang Hu, Wei-Shi Zheng, and Xiang-Wei Kong. Robust principal component analysis based on maximum correntropy criterion. Image Processing, IEEE Transactions on, 20(6):1485–1494, 2011.
  • [15] Ran He, Wei-Shi Zheng, Tieniu Tan, and Zhenan Sun. Half-quadratic-based iterative minimization for robust sparse representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(2):261–275, 2014.
  • [16] Ran He, Tieniu Tan, and Liang Wang. Robust recovery of corrupted low-rank matrix by implicit regularizers. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(4):770–783, 2014.
  • [17] Abhishek Singh and Jose C Principe. Using correntropy as a cost function in linear adaptive filters. In Neural Networks, 2009. IJCNN 2009. International Joint Conference on, pages 2950–2955. IEEE, 2009.
  • [18] Songlin Zhao, Badong Chen, and Jose C Principe. Kernel adaptive filtering with maximum correntropy criterion. In Neural Networks (IJCNN), The 2011 International Joint Conference on, pages 2012–2017. IEEE, 2011.
  • [19] Zongze Wu, Jiahao Shi, Xie Zhang, Wentao Ma, and Badong Chen. Kernel recursive maximum correntropy. Signal Processing, 117:11–26, 2015.
  • [20] Ren Wang, Badong Chen, Nanning Zheng, and Jose C Principe. A variable step-size adaptive algorithm under maximum correntropy criterion. In Neural Networks (IJCNN), 2015 International Joint Conference on, pages 1–5. IEEE, 2015.
  • [21] Liming Shi and Yun Lin. Convex combination of adaptive filters under the maximum correntropy criterion in impulsive interference. Signal Processing Letters, IEEE, 21(11):1385–1388, 2014.
  • [22] Badong Chen, Xi Liu, Haiquan Zhao, and José C Príncipe. Maximum correntropy kalman filter. arXiv preprint arXiv:1509.04580, 2015.
  • [23] Fei Zhu, Abderrahim Halimi, Paul Honeine, Badong Chen, and Nanning Zheng. Correntropy maximization via admm: Application to robust hyperspectral unmixing. IEEE Transactions on Geoscience & Remote Sensing, PP(99):1–12, 2016.
  • [24] Badong Chen, Lei Xing, Junli Liang, Nanning Zheng, and José C Príncipe. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. Signal Processing Letters, IEEE, 21(7):880–884, 2014.
  • [25] Badong Chen, Jianji Wang, Haiquan Zhao, Nanning Zheng, and José C Príncipe. Convergence of a fixed-point algorithm under maximum correntropy criterion. Signal Processing Letters, IEEE, 22(10):1723–1727, 2015.
  • [26] Zongze Wu, Siyuan Peng, Badong Chen, and Haiquan Zhao. Robust hammerstein adaptive filtering under maximum correntropy criterion. Entropy, 17(10):7149–7166, 2015.
  • [27] Badong Chen, Lei Xing, Haiquan Zhao, Nanning Zheng, and José C Príncipe. Generalized correntropy for robust adaptive filtering. Signal Processing, IEEE Transactions on, 64(13):3376–3387, 2016.
  • [28] Torsten Söderström. Errors-in-variables methods in system identification. Automatica, 43(6):939–958, 2007.
  • [29] Ivan Markovsky and Sabine Van Huffel. Overview of total least-squares methods. Signal processing, 87(10):2283–2302, 2007.
  • [30] Sabine Van Huffel and Joos Vandewalle. The Total Least Squares Problem: Computational Aspects and Analysis, volume 9. SIAM, 1991.
  • [31] Gene H Golub and Charles F Van Loan. An analysis of the total least squares problem. SIAM Journal on Numerical Analysis, 17(6):883–893, 1980.
  • [32] Ivan Markovsky, Jan C Willems, Sabine Van Huffel, Bart De Moor, and Rik Pintelon. Application of structured total least squares for system identification and model reduction. Automatic Control, IEEE Transactions on, 50(10):1490–1500, 2005.
  • [33] Bart De Moor and Joos Vandewalle. A unifying theorem for linear and total linear least squares. Automatic Control, IEEE Transactions on, 35(5):563–566, 1990.
  • [34] Berend Roorda and Christiaan Heij. Global total least squares modeling of multivariable time series. Automatic Control, IEEE Transactions on, 40(1):50–63, 1995.
  • [35] Tianshi Chen, Ke Tang, Guoliang Chen, and Xin Yao. On the analysis of average time complexity of estimation of distribution algorithms. In IEEE Congress on Evolutionary Computation, pages 453–460, 2007.
  • [36] R. Rastegar and M. R. Meybodi. A study on the global convergence time complexity of estimation of distribution algorithms. In International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing, pages 441–450, 2005.
  • [37] Qingfu Zhang and H Muhlenbein. On the convergence of a class of estimation of distribution algorithms. Evolutionary Computation, IEEE Transactions on, 8(2):127–136, 2004.
  • [38] Peter J. Huber. Finite sample breakdown of m- and p-estimators. Annals of Statistics, 12(1):119–126, 1984.
  • [39] Zhiqiang Chen and David E. Tyler. On the finite sample breakdown points of redescending m-estimates of location. Statistics & Probability Letters, 69(3):233–242, 2004.
  • [40] Victor J. Yohai. High breakdown-point and high efficiency robust estimates for regression. Annals of Statistics, 15(2):642–656, 1987.
  • [41] Ricardo A. Maronna and Victor J. Yohai. The breakdown point of simultaneous general m estimates of regression and scale. Journal of the American Statistical Association, 86(415):699–703, 1991.
  • [42] R. A. Davis, Wtm Dunsmuir, and Y. Wang. Breakdown points of t-type regression estimators. Biometrika, 87(3):675–687, 2000.