Kalman filtering is a powerful technique for estimating the states of a dynamic system, with applications in many areas including navigation, guidance, data integration, pattern recognition, tracking, and control systems [1, 2, 3, 4, 5]. The original Kalman filter (KF) was derived for a linear state space model under a Gaussian assumption [6, 2]. To cope with nonlinear estimation problems, a variety of nonlinear extensions of the original Kalman filter have been proposed in the literature, including the extended Kalman filter (EKF) [7, 8], the unscented Kalman filter (UKF), the cubature Kalman filter (CKF), and many others. However, most of these Kalman filters are developed based on the popular minimum mean square error (MMSE) criterion and suffer performance degradation under complicated noise, since MMSE is in general not a good choice for estimation in non-Gaussian noise.
In recent years, to address this performance degradation in heavy-tailed (or impulsive) non-Gaussian noises, several robust Kalman filters have been developed by adopting a non-MMSE optimality criterion [11, 12]. In particular, the maximum correntropy criterion (MCC) [13, 14] from information theoretic learning (ITL) [11, 12] has been successfully applied to Kalman filtering to improve robustness against impulsive noises. Typical examples include the maximum correntropy based Kalman filters [15, 16, 17, 18, 19, 20, 21, 22, 23, 24], maximum correntropy based extended Kalman filters [25, 26, 27], maximum correntropy based unscented Kalman filters [28, 29, 30], maximum correntropy based square-root cubature Kalman filters [31, 32], and so on. Since correntropy is a local similarity measure that is insensitive to large errors, these MCC-based filters are only slightly influenced by large outliers [13, 33].
The MCC is a good choice for dealing with heavy-tailed non-Gaussian noises, but its performance may degrade when facing more complicated non-Gaussian noises, such as noises drawn from multimodal distributions. The minimum error entropy (MEE) criterion [34, 35] is another important learning criterion in ITL, which has been successfully applied in robust regression, classification, system identification, and adaptive filtering [34, 35, 36, 37, 38]. Numerous experimental results show that MEE can outperform MCC in many situations, although its computational complexity is somewhat higher [39, 40]. In addition, the superior performance and robustness of MEE have been established theoretically. The goal of this work is to develop a new Kalman-type filter, called the minimum error entropy Kalman filter (MEE-KF), which uses MEE as the optimality criterion. The proposed filter uses the propagation equations to obtain the prior estimates of the state and covariance matrix, and a fixed-point algorithm to update the posterior estimates and covariance matrix, recursively and online. To further improve performance in nonlinear situations, the MEE criterion is also incorporated into the EKF, resulting in the minimum error entropy extended Kalman filter (MEE-EKF).
The rest of the paper is organized as follows. In section II, we briefly review the KF algorithm and MEE criterion. In section III, we develop the MEE-KF algorithm. Sections IV and V provide the computational complexity and convergence analysis, respectively. In section VI, the MEE-EKF is developed. The experimental results are presented in section VII and finally, the conclusion is given in section VIII.
2.1 Kalman Filter
Consider a linear dynamic system with unknown state vector $\mathbf{x}(k)\in\mathbb{R}^{n}$ and available measurement vector $\mathbf{y}(k)\in\mathbb{R}^{m}$. To estimate the state $\mathbf{x}(k)$, the Kalman filter (KF) assumes a state space model described by the following state and measurement equations:
$$\mathbf{x}(k)=\mathbf{F}(k-1)\mathbf{x}(k-1)+\mathbf{q}(k-1),\tag{1}$$
$$\mathbf{y}(k)=\mathbf{H}(k)\mathbf{x}(k)+\mathbf{r}(k),\tag{2}$$
where $\mathbf{F}(k-1)$ and $\mathbf{H}(k)$ are the state-transition matrix and measurement matrix, respectively. Here, the process noise $\mathbf{q}(k-1)$ and measurement noise $\mathbf{r}(k)$ are mutually independent, and satisfy
$$\mathrm{E}\left[\mathbf{q}(k-1)\mathbf{q}^{T}(k-1)\right]=\mathbf{Q}(k-1),\qquad \mathrm{E}\left[\mathbf{r}(k)\mathbf{r}^{T}(k)\right]=\mathbf{R}(k),$$
where $\mathbf{Q}(k-1)$ and $\mathbf{R}(k)$ are the covariance matrices of $\mathbf{q}(k-1)$ and $\mathbf{r}(k)$, respectively. In general, the KF includes two steps:
(1) Predict: The a-priori estimate $\hat{\mathbf{x}}(k|k-1)$ and the corresponding error covariance matrix $\mathbf{P}(k|k-1)$ are calculated by
$$\hat{\mathbf{x}}(k|k-1)=\mathbf{F}(k-1)\hat{\mathbf{x}}(k-1|k-1),\tag{6}$$
$$\mathbf{P}(k|k-1)=\mathbf{F}(k-1)\mathbf{P}(k-1|k-1)\mathbf{F}^{T}(k-1)+\mathbf{Q}(k-1),\tag{7}$$
(2) Update: The a-posteriori estimate $\hat{\mathbf{x}}(k|k)$ and the corresponding error covariance matrix $\mathbf{P}(k|k)$ are obtained by
$$\hat{\mathbf{x}}(k|k)=\hat{\mathbf{x}}(k|k-1)+\mathbf{K}(k)\left(\mathbf{y}(k)-\mathbf{H}(k)\hat{\mathbf{x}}(k|k-1)\right),$$
$$\mathbf{P}(k|k)=\left(\mathbf{I}-\mathbf{K}(k)\mathbf{H}(k)\right)\mathbf{P}(k|k-1),$$
where $\mathbf{K}(k)=\mathbf{P}(k|k-1)\mathbf{H}^{T}(k)\left(\mathbf{H}(k)\mathbf{P}(k|k-1)\mathbf{H}^{T}(k)+\mathbf{R}(k)\right)^{-1}$ is the Kalman filter gain.
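The two-step recursion above can be sketched in a few lines of NumPy (a minimal illustration with generic variable names, not the implementation used in this paper):

```python
import numpy as np

def kf_predict(x_post, P_post, F, Q):
    # Predict step: a-priori estimate and its error covariance.
    x_prior = F @ x_post
    P_prior = F @ P_post @ F.T + Q
    return x_prior, P_prior

def kf_update(x_prior, P_prior, y, H, R):
    # Update step: Kalman gain, a-posteriori estimate and covariance.
    S = H @ P_prior @ H.T + R                # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)     # Kalman filter gain
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```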
2.2 Minimum Error Entropy Criterion
Renyi's entropy of the error $e$ is defined by
$$H_{\alpha}(e)=\frac{1}{1-\alpha}\log V_{\alpha}(e),$$
where $\alpha$ ($\alpha>0$, $\alpha\neq 1$) is the order of Renyi's entropy, and $V_{\alpha}(e)$ denotes the information potential defined by
$$V_{\alpha}(e)=\mathrm{E}\left[p^{\alpha-1}(e)\right]=\int p^{\alpha}(e)\,de,$$
where $p(e)$ is the probability density function (PDF) of the error $e$ and $\mathrm{E}[\cdot]$ denotes the expectation operator. In practical applications, the PDF can be estimated from $N$ error samples $\{e_{i}\}_{i=1}^{N}$ by Parzen's window approach:
$$\hat{p}(e)=\frac{1}{N}\sum_{i=1}^{N}G_{\sigma}(e-e_{i}),$$
where $G_{\sigma}(\cdot)$ denotes a Gaussian kernel with kernel size $\sigma$. For $\alpha=2$, the information potential can then be estimated by
$$\hat{V}_{2}(e)=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}G_{\sigma}\left(e_{i}-e_{j}\right).\tag{14}$$
Since the negative logarithmic function is monotonically decreasing, minimizing the quadratic error entropy $H_{2}(e)=-\log V_{2}(e)$ is equivalent to maximizing the information potential $V_{2}(e)$.
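For the quadratic case $\alpha=2$, the Parzen-based estimate reduces to a double sum of Gaussian kernels over all error pairs. This can be sketched as follows (an illustrative snippet; the function name and arguments are not from the paper):

```python
import numpy as np

def information_potential(errors, sigma):
    # Sample estimate of the quadratic information potential V_2:
    # the average Gaussian-kernel similarity over all error pairs.
    e = np.asarray(errors, dtype=float)
    n = len(e)
    diff = e[:, None] - e[None, :]      # all pairwise differences e_i - e_j
    kernel = np.exp(-diff**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return kernel.sum() / n**2
```

Errors concentrated near a single value yield a larger information potential (lower entropy) than widely spread errors, which is exactly what the MEE criterion rewards.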
3 Minimum Error Entropy Kalman Filter
3.1 Augmented Model
First, we denote the state prediction error as
$$\boldsymbol{\eta}(k)=\hat{\mathbf{x}}(k|k-1)-\mathbf{x}(k).$$
Combining the above state prediction error with the measurement equation (2), one can obtain an augmented model
$$\begin{bmatrix}\hat{\mathbf{x}}(k|k-1)\\ \mathbf{y}(k)\end{bmatrix}=\begin{bmatrix}\mathbf{I}\\ \mathbf{H}(k)\end{bmatrix}\mathbf{x}(k)+\boldsymbol{\nu}(k),\tag{16}$$
where $\mathbf{I}$ denotes an $n\times n$ identity matrix, and
$$\boldsymbol{\nu}(k)=\begin{bmatrix}\boldsymbol{\eta}(k)\\ \mathbf{r}(k)\end{bmatrix}$$
is the augmented noise vector comprising the state prediction and measurement errors. Assuming that the covariance matrix of the augmented noise is positive definite, we have
$$\mathrm{E}\left[\boldsymbol{\nu}(k)\boldsymbol{\nu}^{T}(k)\right]=\begin{bmatrix}\mathbf{P}(k|k-1)&\mathbf{0}\\ \mathbf{0}&\mathbf{R}(k)\end{bmatrix}=\mathbf{B}(k)\mathbf{B}^{T}(k),$$
where $\mathbf{B}(k)=\mathrm{diag}\left(\mathbf{B}_{p}(k|k-1),\mathbf{B}_{r}(k)\right)$, and $\mathbf{B}_{p}(k|k-1)$ and $\mathbf{B}_{r}(k)$ are obtained by the Cholesky decomposition of $\mathbf{P}(k|k-1)$ and $\mathbf{R}(k)$, respectively. Multiplying both sides of (16) by $\mathbf{B}^{-1}(k)$ gives
$$\mathbf{D}(k)=\mathbf{W}(k)\mathbf{x}(k)+\mathbf{e}(k),$$
with
$$\mathbf{D}(k)=\mathbf{B}^{-1}(k)\begin{bmatrix}\hat{\mathbf{x}}(k|k-1)\\ \mathbf{y}(k)\end{bmatrix},\tag{20}$$
$$\mathbf{W}(k)=\mathbf{B}^{-1}(k)\begin{bmatrix}\mathbf{I}\\ \mathbf{H}(k)\end{bmatrix},\tag{21}$$
and $\mathbf{e}(k)=\mathbf{B}^{-1}(k)\boldsymbol{\nu}(k)$.
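Constructing the whitened quantities from the Cholesky factors can be sketched as follows (a NumPy illustration; the function and variable names are generic and merely mirror the roles above):

```python
import numpy as np

def whitened_augmented_model(x_prior, y, H, P_prior, R):
    # Build the whitened model D = W x + e from the a-priori estimate,
    # the measurement, and the Cholesky factors of P_prior and R.
    n, m = len(x_prior), len(y)
    Bp = np.linalg.cholesky(P_prior)            # factor of P(k|k-1)
    Br = np.linalg.cholesky(R)                  # factor of R(k)
    B = np.block([[Bp, np.zeros((n, m))],
                  [np.zeros((m, n)), Br]])      # block-diagonal square root
    B_inv = np.linalg.inv(B)
    D = B_inv @ np.concatenate([x_prior, y])
    W = B_inv @ np.vstack([np.eye(n), H])
    return D, W
```

By construction, the transformed noise $\mathbf{e}(k)=\mathbf{B}^{-1}(k)\boldsymbol{\nu}(k)$ has identity covariance, which puts all components of the augmented model on an equal footing.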
3.2 Derivation of MEE-KF
Based on (14), the cost function of MEE-KF is given by
$$J_{L}\left(\mathbf{x}(k)\right)=\frac{1}{L^{2}}\sum_{i=1}^{L}\sum_{j=1}^{L}G_{\sigma}\left(e_{i}(k)-e_{j}(k)\right),\tag{23}$$
where $e_{i}(k)$ denotes the $i$-th element of $\mathbf{e}(k)=\mathbf{D}(k)-\mathbf{W}(k)\mathbf{x}(k)$ and $L=n+m$.
Then, the optimal estimate of $\mathbf{x}(k)$ is achieved by maximizing the cost function (23), that is,
$$\hat{\mathbf{x}}(k|k)=\arg\max_{\mathbf{x}(k)}J_{L}\left(\mathbf{x}(k)\right).$$
Setting the gradient of the cost function with respect to $\mathbf{x}(k)$ to zero, we have
$$\sum_{i=1}^{L}\sum_{j=1}^{L}G_{\sigma}\left(e_{i}(k)-e_{j}(k)\right)\left(e_{i}(k)-e_{j}(k)\right)\left(\mathbf{w}_{i}(k)-\mathbf{w}_{j}(k)\right)=\mathbf{0},$$
where $\mathbf{w}_{i}^{T}(k)$ denotes the $i$-th row of $\mathbf{W}(k)$.
From the above condition, $\mathbf{x}(k)$ can be solved by a fixed-point iteration:
$$\hat{\mathbf{x}}(k)_{t+1}=f\left(\hat{\mathbf{x}}(k)_{t}\right)=\left[\mathbf{W}^{T}(k)\boldsymbol{\Psi}(k)\mathbf{W}(k)\right]^{-1}\mathbf{W}^{T}(k)\boldsymbol{\Psi}(k)\mathbf{D}(k),$$
where $\boldsymbol{\Psi}(k)=\boldsymbol{\Lambda}(k)-\boldsymbol{\Phi}(k)$. The explicit expressions of $\boldsymbol{\Phi}(k)$, $\boldsymbol{\Lambda}(k)$, and $\boldsymbol{\Psi}(k)$ are
$$\boldsymbol{\Phi}(k)=\left[G_{\sigma}\left(e_{i}(k)-e_{j}(k)\right)\right]_{L\times L},\qquad \boldsymbol{\Lambda}(k)=\mathrm{diag}\left(\boldsymbol{\Phi}(k)\mathbf{1}_{L}\right),$$
so that $\boldsymbol{\Psi}(k)$ is the Laplacian-like weighting matrix built from the kernel matrix $\boldsymbol{\Phi}(k)$, with $\mathbf{1}_{L}$ the all-ones vector.
By using the matrix inversion lemma
$$\left(\mathbf{A}+\mathbf{B}\mathbf{C}\mathbf{D}\right)^{-1}=\mathbf{A}^{-1}-\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{C}^{-1}+\mathbf{D}\mathbf{A}^{-1}\mathbf{B}\right)^{-1}\mathbf{D}\mathbf{A}^{-1},$$
with $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ identified with the corresponding blocks of the fixed-point expression,
one can reformulate the fixed-point update in a Kalman-like form:
$$\hat{\mathbf{x}}(k)_{t+1}=\hat{\mathbf{x}}(k|k-1)+\tilde{\mathbf{K}}(k)\left(\mathbf{y}(k)-\mathbf{H}(k)\hat{\mathbf{x}}(k|k-1)\right),$$
where $\tilde{\mathbf{K}}(k)$ is a modified Kalman gain matrix.
Then, the posterior covariance matrix can be updated by
$$\mathbf{P}(k|k)=\left(\mathbf{I}-\tilde{\mathbf{K}}(k)\mathbf{H}(k)\right)\mathbf{P}(k|k-1)\left(\mathbf{I}-\tilde{\mathbf{K}}(k)\mathbf{H}(k)\right)^{T}+\tilde{\mathbf{K}}(k)\mathbf{R}(k)\tilde{\mathbf{K}}^{T}(k).$$
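The matrix inversion lemma used above can be verified numerically on a small deterministic example (purely illustrative matrices):

```python
import numpy as np

# Matrix inversion lemma:
# (A + B C D)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
A = np.array([[2.0, 0.0], [0.0, 3.0]])   # invertible
B = np.array([[1.0], [0.0]])
C = np.array([[1.0]])
D = np.array([[0.0, 1.0]])

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
assert np.allclose(lhs, rhs)
```

In the derivation, this identity turns the inverse of the normal-equation matrix into a gain-times-innovation form, avoiding a direct inversion at each fixed-point step.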
With the above derivations, the proposed MEE-KF algorithm can be summarized as Algorithm 1.
Step 1: Initialize the a-priori state estimate $\hat{\mathbf{x}}(1|0)$ and the state prediction error covariance matrix $\mathbf{P}(1|0)$; set a proper kernel size $\sigma$ and a small positive number $\varepsilon$.
Step 2: Use Eqs. (6) and (7) to obtain $\hat{\mathbf{x}}(k|k-1)$ and $\mathbf{P}(k|k-1)$, respectively; use the Cholesky decomposition of $\mathbf{P}(k|k-1)$ and $\mathbf{R}(k)$ to obtain $\mathbf{B}_{p}(k|k-1)$ and $\mathbf{B}_{r}(k)$; use Eqs. (20) and (21) to obtain $\mathbf{D}(k)$ and $\mathbf{W}(k)$, respectively.
Step 3: Let $t=1$ and $\hat{\mathbf{x}}(k|k)_{0}=\hat{\mathbf{x}}(k|k-1)$, where $\hat{\mathbf{x}}(k|k)_{t}$ denotes the estimated state at the fixed-point iteration $t$.
Step 4: Use the available measurement $\mathbf{y}(k)$ to update the estimate:
$$\hat{\mathbf{x}}(k|k)_{t}=\hat{\mathbf{x}}(k|k-1)+\tilde{\mathbf{K}}(k)\left(\mathbf{y}(k)-\mathbf{H}(k)\hat{\mathbf{x}}(k|k-1)\right),$$
where the modified gain $\tilde{\mathbf{K}}(k)$ is recomputed from the errors $\mathbf{e}(k)=\mathbf{D}(k)-\mathbf{W}(k)\hat{\mathbf{x}}(k|k)_{t-1}$ at each iteration.
Step 5: Compare $\hat{\mathbf{x}}(k|k)_{t}$ and $\hat{\mathbf{x}}(k|k)_{t-1}$:
$$\frac{\left\|\hat{\mathbf{x}}(k|k)_{t}-\hat{\mathbf{x}}(k|k)_{t-1}\right\|}{\left\|\hat{\mathbf{x}}(k|k)_{t-1}\right\|}\leq\varepsilon.$$
If the above condition holds, set $\hat{\mathbf{x}}(k|k)=\hat{\mathbf{x}}(k|k)_{t}$ and continue to Step 6. Otherwise, set $t\leftarrow t+1$ and return to Step 4.
Step 6: Update the posterior error covariance matrix by
$$\mathbf{P}(k|k)=\left(\mathbf{I}-\tilde{\mathbf{K}}(k)\mathbf{H}(k)\right)\mathbf{P}(k|k-1)\left(\mathbf{I}-\tilde{\mathbf{K}}(k)\mathbf{H}(k)\right)^{T}+\tilde{\mathbf{K}}(k)\mathbf{R}(k)\tilde{\mathbf{K}}^{T}(k),$$
and return to Step 2.
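The fixed-point loop of Steps 3–5 can be sketched for a generic whitened linear model $\mathbf{D}(k)\approx\mathbf{W}(k)\mathbf{x}(k)$ as follows (an illustrative sketch built directly from the pairwise-kernel gradient condition; the function name, variable names, and stopping rule mirror the steps above but are not the paper's implementation):

```python
import numpy as np

def mee_fixed_point(D, W, x0, sigma, eps=1e-6, max_iter=50):
    # Fixed-point MEE solve of D ≈ W x: each iteration re-weights all
    # error pairs with a Gaussian kernel and solves the resulting
    # weighted normal equations.
    x = x0.copy()
    for _ in range(max_iter):
        e = D - W @ x
        G = np.exp(-(e[:, None] - e[None, :])**2 / (2.0 * sigma**2))
        dW = W[:, None, :] - W[None, :, :]      # pairwise row differences w_i - w_j
        dD = D[:, None] - D[None, :]            # pairwise differences d_i - d_j
        A = np.einsum('ij,ijk,ijl->kl', G, dW, dW)
        b = np.einsum('ij,ijk,ij->k', G, dW, dD)
        x_new = np.linalg.solve(A, b)
        if np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x):  # Step 5 test
            return x_new
        x = x_new
    return x
```

In the noiseless case the iteration recovers the true state in a couple of steps; with outliers in a few components of the error, the Gaussian kernel down-weights the offending pairs automatically.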
4 Computational Complexity
This section compares the computational complexities of the KF, the maximum correntropy Kalman filter (MCKF), and the MEE-KF in terms of floating-point operations.
The computational complexity of the MCKF exceeds that of the KF by terms that scale with the fixed-point iteration number $T$, which is relatively small in general, as shown by the simulations in Section VII.
Compared with the KF, the MEE-KF has an additional computational burden induced by the error entropy functions, and it has a slightly higher computational complexity than the MCKF. In terms of order of magnitude, however, the computational complexities of the MEE-KF, MCKF, and KF do not differ significantly.
5 Convergence Issue
This section provides a sufficient condition that ensures the convergence of the fixed-point iterations in the MEE-KF; the proof is similar to existing fixed-point convergence analyses and is therefore omitted here.
First, from the fixed-point equation, we can rewrite
$$f\left(\mathbf{x}(k)\right)=\left[\mathbf{W}^{T}(k)\boldsymbol{\Psi}(k)\mathbf{W}(k)\right]^{-1}\mathbf{W}^{T}(k)\boldsymbol{\Psi}(k)\mathbf{D}(k),$$
with $\boldsymbol{\Psi}(k)=\boldsymbol{\Lambda}(k)-\boldsymbol{\Phi}(k)$ and $e_{i}(k)=d_{i}(k)-\mathbf{w}_{i}^{T}(k)\mathbf{x}(k)$.
Thus, the Jacobian matrix of $f\left(\mathbf{x}(k)\right)$ with respect to $\mathbf{x}(k)$ gives
with . Define as an