Kalman Filtering With Censored Measurements

02/20/2020 ∙ by Kostas Loumponias, et al. ∙ Aristotle University of Thessaloniki

This paper concerns Kalman filtering when the measurements of the process are censored. The censored measurements are addressed by the Tobit model of Type I and are one-dimensional with two censoring limits, while the (hidden) state vectors are multidimensional. For this model, Bayesian estimates for the state vectors are provided through a recursive algorithm of Kalman filtering type. Experiments are presented to illustrate the effectiveness and applicability of the algorithm. The experiments show that the proposed method outperforms other filtering methodologies in minimizing the computational cost as well as the overall Root Mean Square Error (RMSE) for synthetic and real data sets.




1 Introduction

The Kalman filter (KF) [1] has been the subject of extensive research and application, particularly in the areas of object tracking and vehicle navigation. The KF algorithm provides optimal estimates for hidden state vectors under the assumption that the measurements given the state vectors are normally distributed and the corresponding state-space model is linear. However, in many real-life problems the state-space model is non-linear, and the KF then performs poorly. Many methods have been proposed to overcome these drawbacks of the standard KF, such as the Extended Kalman Filter (EKF) [2] and the Unscented Kalman Filter (UKF) [3],[4].

One kind of non-linearity in state-space models is due to censoring of the measurements [5], [6]; methods such as the EKF and UKF cannot cope optimally with censored measurements. In what follows we deal with this kind of non-linear state-space model, i.e., models with censored measurements. The use of statistics for censored data in filtering problems has received increased attention in recent years [7],[8]. In [7], censored measurements are treated as missing measurements; thus, only the state prediction (an a priori estimate) is used when a measurement is censored.

In [8], [9], the Tobit Kalman Filter (TKF) was proposed to estimate the state vector recursively, given the censored measurements. The censored measurements are addressed by the Tobit model of Type I with two censoring limits [10]. TKF provides unbiased, recursive estimates of the state vectors as a linear combination of the a priori state estimate and the associated censored measurement, taking the censoring limits into account. Furthermore, the TKF process is completely recursive and computationally inexpensive, making it a strong candidate for real-time applications. Nevertheless, since the standard TKF algorithm does not compute the exact covariance matrix of the censored measurements, it provides non-optimal estimates [9].

In [11], an online, real-time multi-object tracking (MOT) algorithm based on censored measurements is presented. More specifically, the authors use the Adaptive Tobit Kalman Filter (ATKF) to estimate the positions of the objects. The ATKF is based on the same framework as the TKF, but the two methods have two crucial differences: compared to TKF, the ATKF provides 1) the exact estimate of the variance of the censored measurements and 2) adaptive censoring limits at each time step. In [12],[13], an ATKF was used to filter the spatial coordinates of a human skeleton (captured by a Kinect camera [14]); however, the exact covariance matrix of the censored measurements was not used. Fei Han et al. [15] deal with TKF for a class of linear discrete-time systems with random parameters. The elements of the state-space matrices are allowed to be random variables in order to reflect reality. Furthermore, they establish a novel weighting covariance formula to address the quadratic terms associated with the random matrices. Their method copes with a single censoring limit.

The main contribution of this paper is the establishment of a new Censored Kalman Filter (CKF) based on the conditional distribution function of the (hidden) state vector when the measurements are censored. In contrast to other studies dealing with censored measurements [16],[9], we do not derive estimates as a linear combination of the a priori state estimate and the censored measurement. More specifically, Bayesian estimates [17] are calculated when the measurements lie in the censored region. Furthermore, we cope with i) a multidimensional hidden state vector, ii) a one-dimensional censored measurement, and iii) interval censoring (Type I censoring) [18], where a data point lies in a bounded interval determined by known lower and upper limits. For that purpose we provide: (a) the first and second moments of a multidimensional random vector, conditional on a one-dimensional censored normal variable, given that their joint unconditional and uncensored distribution is normal, and (b) an accurate calculation of the associated likelihood function given the censored measurements. The proposed method, CKF, modifies the standard KF process only when the measurements lie in the censored region. The results show that CKF performs better than TKF and KF, at a very low computational cost. Furthermore, CKF can be used for multidimensional censored measurements in the case where the coordinates of the measurements are uncorrelated.

The rest of the paper is organized as follows: In Section 2, Bayesian state estimates conditional on censored measurements are calculated, and the associated CKF algorithm is presented in detail. In Section 3, experimental results are illustrated using artificial and real data (Multi-Object Tracking) to demonstrate the effectiveness and the applicability of the proposed filtering algorithm. Finally, in Section 4, concluding remarks are provided.

2 Censored Kalman Filtering

In this section we deal with the KF process with censored measurements. First, we briefly describe the meaning of censored measurements and the vanilla KF. Next, we calculate in detail the Bayesian estimates for the state at (discrete) time t given the measurements (either censored or uncensored) up to time t, denoted by x̂_{t|t}, or briefly x̂_t. Finally, we provide two recursive algorithms to cope with one-dimensional and multidimensional censored measurements, respectively.

2.1 Censored measurements

The KF process uses a series of measurements y_1, y_2, …, observed over time and containing statistical noise, in order to estimate the set of unknown state vectors x_1, x_2, …. The standard state-space model is given by the equations

x_t = A x_{t-1} + w_t, (1)
y_t = H x_t + v_t, (2)

where A, H are the transition and observation matrices, respectively, and w_t and v_t stand for the normally distributed noises of the process and the measurement, respectively. While KF provides optimal estimates for the linear state-space model (1)-(2), many real-life applications are described by non-linear state-space models, with which KF cannot cope. We note that in such models the non-linearity often arises from censored measurements, which is the case we deal with here.
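As a point of reference, the predict and update recursions of the standard (uncensored) KF for the model (1)-(2) can be sketched as follows; the specific matrices A, H, Q, R below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def kf_predict(x, P, A, Q):
    """Predict stage: propagate the state estimate and its covariance."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, y, H, R):
    """Update stage: correct the a priori estimate with measurement y."""
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ (y - H @ x_pred)
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# Illustrative 2-D constant-velocity model with a scalar position measurement
A = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, A, Q)
x, P = kf_update(x, P, np.array([1.0]), H, R)
```

One step of predict followed by update reproduces the usual gain-weighted correction of the a priori state by the measurement residual.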

Censoring is a condition in which the value of a measurement or observation is only partially known or unknown. In this paper we deal only with the case of partially known measurements. One type of such censoring is interval censoring, where all observations lie in a finite interval. In the case of interval censoring, the measurements of the censored state-space model are defined by the relations

y*_t = L1 if y_t ≤ L1,   y*_t = y_t if L1 < y_t < L2,   y*_t = L2 if y_t ≥ L2, (3)
y_t = H x_t + v_t, (4)

where y*_t and y_t stand for the censored and latent (uncensored) measurements, respectively, and L1 and L2 are the lower and upper censoring limits, respectively. It is clear from (3) that the censored measurement y*_t is not normally distributed, while the latent measurement y_t given the state is. Therefore, it is necessary to modify the standard KF in order to deal with censored measurements.
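The censoring relations above simply clip the latent measurement at the limits; a minimal sketch (the limit values −1.0 and 1.0 are arbitrary, for illustration only):

```python
def censor(y, L1, L2):
    """Interval (Type I) censoring: the latent measurement y is observed
    only when it falls strictly inside (L1, L2); otherwise the limit
    itself is reported."""
    if y <= L1:
        return L1
    if y >= L2:
        return L2
    return y

# Illustrative limits L1 = -1.0, L2 = 1.0:
observed = [censor(y, -1.0, 1.0) for y in (-2.5, 0.3, 1.7)]
# observed == [-1.0, 0.3, 1.0]
```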

2.2 Recursive Bayesian Estimations for Censored Measurements

In what follows we apply Bayesian estimation to estimate the unknown probability density function (pdf) p(x_t | y*_1:t) recursively over time, using the incoming measurements. In the case where the variables involved are normally distributed and the state-space model is linear, as given by (1)-(2), the Bayesian filter becomes the standard KF. Two assumptions are used to derive the recursive Bayesian filter: a) the states follow a first-order Markov process, i.e., p(x_t | x_1, …, x_t-1) = p(x_t | x_t-1), and b) any measurement at time t does not depend on the previous states, given the current state x_t. Using Bayes' rule we get [17]

p(x_t | y*_1:t) = p(y*_t | x_t) p(x_t | y*_1:t-1) / p(y*_t | y*_1:t-1). (5)
The aim of the Bayes filter is to provide posterior estimates for the states, which are taken to be the conditional means, and for the corresponding covariance matrices, given the distribution (5).
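The two assumptions above yield the familiar two-step form of the Bayes filter; in standard notation (a sketch, consistent with the roles of the prior and the likelihood in (5)):

```latex
% Prediction step (Chapman-Kolmogorov equation)
p(x_t \mid y^*_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y^*_{1:t-1})\, \mathrm{d}x_{t-1}

% Update step (Bayes' rule)
p(x_t \mid y^*_{1:t}) = \frac{p(y^*_t \mid x_t)\, p(x_t \mid y^*_{1:t-1})}{p(y^*_t \mid y^*_{1:t-1})}
```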

In what follows, we deal with one-dimensional censored measurements and assume that the random vector x given y is normally distributed. Next, we provide a lemma in which the conditional censored pdf f(x | y*) is calculated via the corresponding unconditional censored pdf f(y*). For that purpose we use the following notation: x denotes the multidimensional random vector and y the one-dimensional latent measurement, with y* its censored version; δ(·) is the Kronecker delta function; φ stands for the normal pdf; I_(L1,L2)(y) is the indicator function taking the value 1 when y belongs to the interval (L1, L2) and 0 otherwise; f_y stands for the marginal normal pdf of y; Φ is the cumulative distribution function of the standard normal distribution; μ_x and μ_y are the means of x and y, respectively; σ_y² is the variance of y; Σ_x is the covariance matrix of x; Σ_xy is the cross-covariance matrix of x and y; and ρ denotes the corresponding correlation coefficient. Then the following lemma holds:

Lemma 2.1.

The conditional censored pdf f(x | y*) can be written in the form


Obviously, the unconditional censored pdf f(y*) is given by

from which it follows that


In order to calculate the conditional censored pdf f(x | y*) via the unconditional one given in (6), we distinguish three cases: 1) L1 < y* < L2, 2) y* = L1 and 3) y* = L2. In the case where y* lies in the uncensored region (L1, L2), x given y* is normally distributed and, more specifically,


In the case where y* = L1, or equivalently, y ≤ L1, it is derived from (6) that


In the same way, for y* = L2, it follows that

We observe that (8) has the same form as (5), and more specifically:

  • stands for the a priori distribution

  • stands for the probability and

  • stands for the probability

Then, the following proposition can be proved.

Proposition 2.2.

For a normally distributed multivariate random variable with given mean vector and covariance matrix, the following statements hold:

where φ stands for the pdf of the standard normal distribution.

  1. We derive from (8) that


    where E[y | L1 < y < L2] stands for the truncated mean of the r.v. y in the interval (L1, L2) [19] and is equal to


    Then we get by (9) and (10) that

  2. We have that

    where the second term has been evaluated in the first part of the proposition. Then,


where E[y² | L1 < y < L2] is the truncated second moment of y in the interval (L1, L2) [19] and is given by


Thus, (12) can be written by means of (10) and (13) as


Then, we get by (11) and (14) that


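The truncated mean (10) and truncated second moment (13) of a scalar normal variable have standard closed forms in terms of φ and Φ; the following self-contained sketch implements them for y ~ N(μ, σ²) truncated to a generic interval (a, b):

```python
import math

def std_normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_moments(mu, sigma, a, b):
    """First and second moments of y ~ N(mu, sigma^2) given a < y < b."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = std_normal_cdf(beta) - std_normal_cdf(alpha)  # probability mass in (a, b)
    # Truncated mean E[y | a < y < b]
    m1 = mu + sigma * (std_normal_pdf(alpha) - std_normal_pdf(beta)) / Z
    # Truncated variance, then E[y^2 | a < y < b] = Var + (E[y])^2
    var = sigma**2 * (1.0
                      + (alpha * std_normal_pdf(alpha) - beta * std_normal_pdf(beta)) / Z
                      - ((std_normal_pdf(alpha) - std_normal_pdf(beta)) / Z) ** 2)
    return m1, var + m1 * m1
```

By symmetry, a standard normal truncated to (−1, 1) has truncated mean 0, and its truncated second moment evaluates to roughly 0.29.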
In the same way as presented in Proposition 2.2, it can be proved that:

Proposition 2.3.

For a normally distributed multivariate random variable with given mean vector and covariance matrix, the following statements hold:

where .

We note that the conditional random vector x given a censored measurement is not normally distributed; nevertheless, normality can be accepted as an approximation for various values of the censoring limit and of the covariance matrix. More precisely, this normality assumption can be accepted if the value of the censoring limit a is high enough and the correlation coefficient is low. To illustrate this statement, we consider the following example.

In Table 1, the results of K-S (Kolmogorov-Smirnov) tests for various values of the censoring limits and correlation coefficients are presented; the values of the censoring limit and of the correlation coefficient are considered over intervals with fixed steps. As can be seen in Table 1, for sufficiently low correlation coefficients the null hypothesis of normality cannot be rejected for any value of the censoring limit, while for very high values of the correlation coefficient the null hypothesis has to be rejected. Thus, for our example, if the correlation coefficient is not too high, we can accept that the conditional distribution function is approximated by a normal distribution with mean vector (11) and covariance matrix (15). Analogous results hold for the conditional pdf at the other censoring limit.

Next, in Fig. 1 the conditional distribution function is presented for two values of the censoring limit. Concerning normality, notice that for the one value the conditional pdf is not symmetric, while for the other it approximately represents a normal distribution.
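A check of the kind summarized in Table 1 can be reproduced numerically: draw (x, y) from a standard bivariate normal with correlation ρ, keep x on the censoring event y ≤ a, and measure the K-S distance between the standardized sample and the standard normal CDF. The parameter values (ρ, a, sample size) below are illustrative, not those of the paper.

```python
import math
import random

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ks_to_normal(samples):
    """K-S distance between the standardized empirical CDF of `samples`
    and the standard normal CDF."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in samples) / (n - 1))
    xs = sorted((v - mean) / sd for v in samples)
    d = 0.0
    for i, v in enumerate(xs):
        c = std_normal_cdf(v)
        d = max(d, (i + 1) / n - c, c - i / n)
    return d

def conditional_ks(rho, a, n=4000, seed=0):
    """K-S distance of x | (y <= a) from normality, for (x, y) standard
    bivariate normal with correlation rho."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n:
        y = rng.gauss(0.0, 1.0)
        if y <= a:
            # Conditionally on y, x ~ N(rho * y, 1 - rho^2)
            kept.append(rho * y + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0))
    return ks_to_normal(kept)

low = conditional_ks(rho=0.1, a=0.0)    # weak correlation: close to normal
high = conditional_ks(rho=0.95, a=0.0)  # strong correlation: visibly skewed
```

Consistent with Table 1, the low-correlation sample stays close to normal while the high-correlation sample deviates noticeably.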

correlation:    [0.05, 0.75]   0.85   0.95
                      0          0      1
                      0          1      1
                      0          0      1
[1.55, 2.95]          0          0      0

Table 1: K-S tests for the normality hypothesis; 0 and 1 represent acceptance and non-acceptance of the null hypothesis, respectively (columns: values of the correlation coefficient; rows: intervals of the censoring limit).
Figure 1: The conditional distribution function for two values of the censoring limit.

2.3 The Proposed Model

The standard KF process consists of two stages: a) the predict stage and b) the update stage. In the predict stage, only the last state estimate x̂_{t-1} is used in order to calculate the a priori estimate by (1). The state vector at time t-1, given the measurements up to time t-1, is normally distributed, and then, by (1), it is clear that the a priori state is normally distributed as well. In the censored KF process described by (3)-(4), this distribution is not normal (see Lemma 2.1) when the last measurement lies in the censored region; nevertheless, as can be seen in Table 1, if the value of the correlation coefficient is not high, it can be accepted that the distribution is (approximately) normal. Therefore, as in the vanilla KF process, the a priori state estimate and the corresponding covariance matrix of the a priori estimation error are given by


where x̂_{t-1} and its error covariance matrix were calculated at the previous step.

In the next step the latent measurement y_t is used to update the a priori estimate. In the case where y_t lies in the uncensored region (L1, L2), the state given the measurements is normally distributed (see (7)). Therefore, the a posteriori estimate x̂_t and the corresponding error covariance matrix can be calculated by the standard KF process in an optimal way (i.e., unbiased, minimum-variance estimates are provided). Thus, the KF algorithm has to be modified only for the case where the measurements are censored; to that end, we utilize Propositions 2.2 and 2.3.

In the case where y*_t = L1, it is derived from Proposition 2.2 and the state-space model (1)-(2) that




Then, substituting the corresponding quantities, we get by Proposition 2.2 that




In the same way, when y*_t = L2, it is derived that




where the corresponding quantities are defined analogously to the case y*_t = L1.
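The censored update steps above can be sketched as follows. For jointly normal (x, y) one has E[x | y in S] = μ_x + B (E[y | y in S] − μ_y) and Cov(x | y in S) = Σ_x + B (Var(y | y in S) − σ_y²) Bᵀ with B = Σ_xy / σ_y², which is the structure exploited in Propositions 2.2 and 2.3. The sketch below is a simplified illustration built on those identities, not the paper's full algorithm:

```python
import math
import numpy as np

def std_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def std_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trunc_mean_var(mu, sigma, a, b):
    """Mean and variance of y ~ N(mu, sigma^2) truncated to (a, b)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = std_cdf(beta) - std_cdf(alpha)
    m = mu + sigma * (std_pdf(alpha) - std_pdf(beta)) / Z
    v = sigma**2 * (1.0 + (alpha * std_pdf(alpha) - beta * std_pdf(beta)) / Z
                    - ((std_pdf(alpha) - std_pdf(beta)) / Z) ** 2)
    return m, v

def censored_update(mu_x, Sigma_x, Sigma_xy, mu_y, var_y, y_star, L1, L2):
    """Posterior mean/covariance of x given the censored measurement y*.
    Interior y*: exact normal conditioning (standard KF-style update).
    y* at a limit: condition on the event {y <= L1} or {y >= L2}."""
    B = Sigma_xy / var_y                   # regression coefficient of x on y
    if L1 < y_star < L2:
        m_y, v_y = y_star, 0.0             # y observed exactly
    elif y_star <= L1:
        m_y, v_y = trunc_mean_var(mu_y, math.sqrt(var_y), -1e12, L1)
    else:
        m_y, v_y = trunc_mean_var(mu_y, math.sqrt(var_y), L2, 1e12)
    mean = mu_x + B * (m_y - mu_y)
    cov = Sigma_x + np.outer(B, B) * (v_y - var_y)
    return mean, cov

# Illustrative call: 2-D state, scalar measurement censored at the lower limit
mean, cov = censored_update(np.zeros(2), np.eye(2), np.array([0.5, 0.0]),
                            0.0, 1.0, -1.0, -1.0, 1.0)
```

Note that when y* is interior the covariance correction reduces to the usual Σ_x − Σ_xy Σ_xyᵀ / σ_y², while a censored y* shrinks the update toward the a priori estimate.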

In dealing with real data and censored measurements, the noise variance of the latent measurement y_t is usually unknown. In order to overcome this problem, we adopt the assumption that the latent measurement noise is normally distributed, as in (4) (white noise with constant variance). Then, we can estimate this variance by means of the likelihood function of the censored measurements. The likelihood function for the censored measurements given in (3) can be calculated by (19) as


Then we get by (24) the following lemma:

Lemma 2.4.

The likelihood function of the censored normal distribution is given by
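For reference, the textbook Type I Tobit (censored-normal) likelihood with two limits, to which expressions of this kind reduce, has the form below (μ_t denotes the mean of the latent measurement and σ its standard deviation; this is stated as background, not as a transcription of the paper's equation):

```latex
L(\sigma) = \prod_{t\,:\,L_1 < y^*_t < L_2} \frac{1}{\sigma}\,\phi\!\left(\frac{y^*_t - \mu_t}{\sigma}\right)
\;\prod_{t\,:\,y^*_t = L_1} \Phi\!\left(\frac{L_1 - \mu_t}{\sigma}\right)
\;\prod_{t\,:\,y^*_t = L_2} \left[1 - \Phi\!\left(\frac{L_2 - \mu_t}{\sigma}\right)\right]
```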