Joint Target Detection, Tracking and Classification with Forward-Backward PHD Smoothing

12/06/2018, by Yanyuan Qin et al.

Forward-backward Probability Hypothesis Density (PHD) smoothing is an efficient approach to target tracking in dense clutter environments. Although the target class has been widely viewed as useful information for enhancing target tracking, no existing work in the literature incorporates feature information into PHD smoothing. In this paper, we generalize PHD smoothing by augmenting the forward-backward PHD filter with a general mode, which may be a kinematic mode, a class mode, or a combination of the two. Through a top-down method, the general-mode-augmented forward-backward PHD smoothing is derived. The evaluation results show that our approach outperforms the state-of-the-art joint detection, tracking and classification algorithm in target state estimation, number estimation and classification. The reduction of OSPA distance is up to 40%.


I Introduction

Multi-target tracking is a challenging problem in surveillance systems. It requires estimating not only a set of dynamic target states but also the number and types of targets. Recently, the probability hypothesis density (PHD) filter has received much attention [1, 2, 3, 4, 5, 6]. Compared with traditional association-based multi-target tracking approaches such as joint probabilistic data association (JPDA) and multiple hypothesis tracking (MHT), the PHD filter avoids data association between measurements and tracks because measurements and false alarms are represented as random sets. The PHD filter is capable of dealing with target birth, spawning, and disappearance in dense clutter.

Currently, there are two kinds of PHD implementations: the sequential Monte Carlo PHD filter [3] and the Gaussian mixture PHD filter [4]. Compared with the Gaussian mixture PHD filter, the sequential Monte Carlo approximation is able to handle non-linear, non-Gaussian problems. To track maneuvering targets, multi-model sequential Monte Carlo PHD filters have been introduced. Smoothing uses future observations to improve the precision of the current state estimate. [7] derives PHD smoothing based on the finite set statistics method; it can also be extended to multi-model cases, yielding multi-model PHD smoothing. However, Mahler points out that existing works have adopted a bottom-up theoretical approach: they take the PHD filter or the CPHD filter as the starting point and then attempt to generalize it to jump-Markov systems. None have adopted a theoretically top-down approach, which begins with the multi-target Bayes filter as the starting point, generalizes it to a multi-target jump-Markov filter, and then, and only then, derives PHD filter equations from this generalized filter. As a result, it is unclear whether any of these proposed jump-Markov PHD filters are fully rigorous from a multi-target-statistics point of view. [8] derived a PHD filter for multi-target jump-Markov nonlinear systems (JMNS) through a top-down method.

The feature or class information of a target is useful for improving tracking performance [9]. By incorporating feature information into the particle filter, JPDA and MHT, these methods become more efficient at tracking closely spaced targets moving in parallel or crossing targets from different classes. Random finite set (RFS) theory also provides an efficient tool for incorporating feature information. If feature measurements are considered, the RFS-based PHD filter can also be applied to joint detection, tracking and classification [10, 11, 12, 13]. Yang [1] proposed assigning a class-matched PHD-like filter to each type of target, each with a class-dependent kinematic model set that describes the kinematic behavior of its targets precisely.

PHD smoothing improves the performance of the PHD filter. However, to the best of our knowledge, PHD smoothing with classification information has not yet been considered in the literature. Therefore, we address this issue in this paper and derive a classification-aided PHD smoothing. Specifically, we derive the general feature-conditioned forward-backward PHD smoothing through the "top-down" method.

In this paper, we make the following contributions:

  • The forward-backward PHD-JDTC smoothing is proposed using a "top-down" approach, which can effectively deal with the multi-target joint detection, tracking and classification problem.

  • The Sequential Monte Carlo (SMC) implementation of forward-backward PHD-JDTC smoothing is presented and the structure of PHD-JDTC smoothing is analyzed.

  • We propose to utilize the signal amplitude of targets with unknown SNR for the JDTC problem, avoiding the need for a priori information on the target's average SNR. The simulation results show that our approach can make class decisions effectively and outperforms the state-of-the-art PHD-JDTC filter in target tracking accuracy.

This paper is organized as follows. First, Section II provides a brief review of forward-backward PHD smoothing based on random finite set theory. It also presents the forward PHD filter with a class-dependent kinematic model set. In Section III, the PHD filter and smoothing with a general feature are derived via a "top-down" approach. In Section IV, the forward-backward PHD-JDTC smoothing is derived by considering the class information, and its sequential Monte Carlo implementation is provided. In Section V, a simulation case is designed and the evaluation results show the performance of the proposed approach. Finally, conclusions and future work are given in Section VI.

II RFS-based forward-backward PHD smoothing

In multi-target tracking, both the number of targets and their states are random, as are the number of measurements and the measurements themselves. Therefore the states and measurements can be modeled by random finite sets. This section reviews forward-backward PHD smoothing, which provides the basis for deriving the forward-backward PHD smoothing with an augmented general mode. In Subsection II-A, random finite sets are used to model the multi-target states and observations, and the multi-target Bayes filter and smoother are given. Their first-order approximation, the PHD smoothing algorithm, is outlined in Subsection II-B.

II-A Multi-target Bayes forward-backward smoothing

Assume that at time $k$ a single-target state $x_k$ belongs to the state space $\mathcal{X}$, i.e. $x_k \in \mathcal{X}$; then the multi-target state can be defined as follows:

$$X_k = \{x_{k,1}, \ldots, x_{k,N_k}\} \subset \mathcal{X} \quad (1)$$

Suppose that the single-target observation space is $\mathcal{Z}$ and a single-target observation at time $k$ is $z_k \in \mathcal{Z}$; the multi-target measurement set is given by

$$Z_k = \{z_{k,1}, \ldots, z_{k,M_k}\} \subset \mathcal{Z} \quad (2)$$

where $N_k$ and $M_k$ are the numbers of targets and measurements, respectively, while $X_k$ and $Z_k$ are finite subsets of $\mathcal{X}$ and $\mathcal{Z}$, respectively.

For the measurements from time 1 to $k$, the aggregate of all of them can be expressed as

$$Z_{1:k} = Z_1 \cup Z_2 \cup \cdots \cup Z_k \quad (3)$$

In the framework of FISST (finite set statistics), the uncertainty in the multi-target states and the corresponding observations is represented by random finite sets.

Forward-backward smoothing consists of forward filtering followed by backward smoothing. In the forward filtering step, the posterior density is propagated forward to time $k$ via the Bayes recursion. In the backward smoothing step, the smoothed density is propagated backward to times $t < k$ via the backward smoothing recursion. By analogy to the single-target Bayes predictor, the multi-target forward prediction is calculated as follows,

$$f_{k|k-1}(X_k \mid Z_{1:k-1}) = \int f_{k|k-1}(X_k \mid X_{k-1})\, f_{k-1|k-1}(X_{k-1} \mid Z_{1:k-1})\, \delta X_{k-1} \quad (4)$$

where $\int \cdot\, \delta X$ denotes the set integral (defined through the set derivative of FISST), and $f_{k|k-1}(X_k \mid X_{k-1})$ is the multi-target Markov density.

With the measurement set $Z_k$ at time $k$, the multi-target forward update is given by

$$f_{k|k}(X_k \mid Z_{1:k}) = \frac{g_k(Z_k \mid X_k)\, f_{k|k-1}(X_k \mid Z_{1:k-1})}{\int g_k(Z_k \mid X)\, f_{k|k-1}(X \mid Z_{1:k-1})\, \delta X} \quad (5)$$

where $g_k(Z_k \mid X_k)$ is the multisource likelihood function.

The smoothed multi-target density is propagated backward, from time $k$ to $t < k$, via the multi-target backward smoothing recursion

$$f_{t|k}(X_t \mid Z_{1:k}) = f_{t|t}(X_t \mid Z_{1:t}) \int \frac{f_{t+1|t}(X_{t+1} \mid X_t)\, f_{t+1|k}(X_{t+1} \mid Z_{1:k})}{f_{t+1|t}(X_{t+1} \mid Z_{1:t})}\, \delta X_{t+1} \quad (6)$$

II-B PHD filter and smoothing

The multi-target Bayes filter is computationally intractable to implement directly. Under the assumptions that no target generates more than one measurement, each measurement is generated by at most one target, all measurements are conditionally independent given the target states, detections may be missed, and false alarms follow a multi-object Poisson process, Mahler [10] proposed the first-order multi-target moment approximation of the multi-target Bayes filter, the Probability Hypothesis Density (PHD) $D_k(x)$. Given any region $S$ of the single-target state space $\mathcal{X}$, the integral $\int_S D_k(x)\,dx$ is the expected number of targets in $S$. In particular, if $S$ is the entire state space, then $\int_{\mathcal{X}} D_k(x)\,dx$ is the total expected number of targets in the scene.

Compared with the optimal multi-target Bayes recursion, the PHD filter is much simpler because of its first-order multi-target moment approximation. Moreover, its computational complexity is low, since the PHD integrals are performed over the single-target space.

PHD forward-backward smoothing can be derived from the physical-space approach [14] and from standard point process theory [5], respectively.

PHD forward filtering. In the prediction step,

$$D_{k|k-1}(x) = \gamma_k(x) + \int \left[ p_{S,k}(x')\, f_{k|k-1}(x \mid x') + \beta_{k|k-1}(x \mid x') \right] D_{k-1|k-1}(x')\, dx' \quad (7)$$

where $\gamma_k(\cdot)$ is the intensity of birth targets at time $k$, $\beta_{k|k-1}(\cdot \mid x')$ is the intensity of targets spawned from $x'$, and $p_{S,k}(x')$ is the survival probability of existing targets.

The state is then updated:

$$D_{k|k}(x) = \left[ 1 - p_{D,k}(x) \right] D_{k|k-1}(x) + \sum_{z \in Z_k} \frac{p_{D,k}(x)\, g_k(z \mid x)\, D_{k|k-1}(x)}{\kappa_k(z) + \int p_{D,k}(x')\, g_k(z \mid x')\, D_{k|k-1}(x')\, dx'} \quad (8)$$

where $p_{D,k}(x)$ is the probability of detection and $\kappa_k(z)$ is the clutter intensity.
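As a concrete illustration of Eqs. (7) and (8), the following is a minimal 1-D SMC sketch, not the paper's implementation, assuming a Gaussian random-walk motion model, a Gaussian measurement likelihood, a uniform birth region, no spawning, and constant $p_S$, $p_D$ and clutter intensity; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def phd_predict(particles, weights, p_s=0.95, q=0.5, n_birth=100, birth_w=0.01):
    """One SMC PHD prediction step (Eq. (7), no spawning): propagate the
    surviving particles through a 1-D random-walk motion model and append
    equally weighted birth particles drawn from an assumed birth region."""
    moved = particles + rng.normal(0.0, q, size=particles.shape)  # motion model
    surv_w = p_s * weights                                        # survival mass
    births = rng.uniform(-50.0, 50.0, size=n_birth)               # birth region
    return np.concatenate([moved, births]), np.concatenate([surv_w, np.full(n_birth, birth_w)])

def phd_update(particles, weights, measurements, p_d=0.9, r=1.0, kappa=1e-3):
    """One SMC PHD update step (Eq. (8)): a missed-detection term plus one
    term per measurement, each normalised by clutter intensity + detection mass."""
    new_w = (1.0 - p_d) * weights                                 # missed detections
    for z in measurements:
        g = np.exp(-0.5 * ((z - particles) / r) ** 2) / (r * np.sqrt(2.0 * np.pi))
        num = p_d * g * weights
        new_w = new_w + num / (kappa + num.sum())                 # PHD update ratio
    return new_w
```

After the update, `weights.sum()` approximates the expected number of targets in the scene.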

PHD backward smoothing. Smoothing provides better estimates than filtering because it makes use of more measurements. There are mainly three kinds of smoothing techniques: fixed-interval smoothing, fixed-point smoothing and fixed-lag smoothing. Fixed-lag smoothing estimates the state at time $t$ given the measurements up to time $t + \Delta$ for a fixed time lag $\Delta$. Here we consider fixed-lag smoothing for the multi-target backward recursion:

$$D_{t|k}(x) = D_{t|t}(x) \left[ 1 - p_{S,t}(x) + p_{S,t}(x) \int \frac{f_{t+1|t}(x' \mid x)\, D_{t+1|k}(x')}{D_{t+1|t}(x')}\, dx' \right] \quad (9)$$

where $D_{t+1|t}(\cdot)$ is the predicted PHD given by Eq. (7). It should be noted that the backward recursion is initialized with the filtering result at the present time $k$ and stopped at time $k - \Delta$, where $\Delta$ is the time lag of the smoothing algorithm.
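A matching sketch of one backward pass of Eq. (9) in the same 1-D SMC setting, with hypothetical models rather than the paper's code: the filtered weights at time $t$ are re-scaled using the smoothed particles at $t+1$, with a constant survival probability and an assumed constant birth intensity in the predicted-PHD denominator.

```python
import numpy as np

def phd_backward_smooth(x_t, w_t, x_next, w_next_smoothed, p_s=0.95, q=0.5, gamma=1e-3):
    """One backward pass of Eq. (9) in SMC form: re-weight the time-t
    particles using the smoothed particles at t+1. The predicted PHD in the
    denominator is approximated by p_s * sum_l w_t^l f(x'|x^l) + gamma,
    where gamma is a constant stand-in for the birth intensity (no spawning)."""
    # f[j, i] = f(x_{t+1}^j | x_t^i), a 1-D Gaussian transition density
    f = np.exp(-0.5 * ((x_next[:, None] - x_t[None, :]) / q) ** 2) / (q * np.sqrt(2.0 * np.pi))
    denom = p_s * (f @ w_t) + gamma          # predicted PHD at each t+1 particle
    ratio = (w_next_smoothed / denom) @ f    # sum_j w_{t+1|k}^j f(x^j|x^i) / denom_j
    return w_t * ((1.0 - p_s) + p_s * ratio)
```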

III Forward-backward PHD filter with general mode

The PHD smoothing algorithm in Section II only takes into account the kinematic state of targets and is therefore unable to handle maneuvering target tracking or joint detection, tracking and classification. To tackle these problems, it is necessary to append a general mode, which might be a kinematic mode, a class, or a combination of them, to the target state. The general mode can be viewed as a general jump Markov state, which depends only on the state at the previous time step and is independent of the states at earlier time steps. In this section, the forward-backward PHD smoothing with states extended by a general jump Markov mode is given in detail. First the forward extended PHD filter is outlined in Subsection III-A, and then the backward extended PHD smoothing recursion is derived in Subsection III-B.

III-A Forward PHD filter with general mode

Proposition 1

A kinematic mode and class information can both be treated as a general jump Markov mode.

The single-target state space consists of augmented states of the form $\tilde{x} = (x, o)$, based on which the multi-target state has the form $\tilde{X}_k = \{(x_{k,1}, o_{k,1}), \ldots, (x_{k,N_k}, o_{k,N_k})\}$, where $o$ is the general mode with the jump Markov property. Substituting $\tilde{x}$ for $x$ in the PHD prediction and update equations, we get the following expressions.

$$D_{k|k}(x, o) = \left[ 1 - p_{D,k}(x, o) \right] D_{k|k-1}(x, o) + \sum_{z \in Z_k} \frac{p_{D,k}(x, o)\, g_k(z \mid x, o)\, D_{k|k-1}(x, o)}{\kappa_k(z) + \sum_{o'} \int p_{D,k}(x', o')\, g_k(z \mid x', o')\, D_{k|k-1}(x', o')\, dx'} \quad (10)$$

where the set integral over the augmented state space becomes a mode sum combined with an ordinary integral. The prediction step is

$$D_{k|k-1}(x, o) = \gamma_k(x, o) + \sum_{o'} \int \left[ p_{S,k}(x', o')\, f_{k|k-1}(x, o \mid x', o') + \beta_{k|k-1}(x, o \mid x', o') \right] D_{k-1|k-1}(x', o')\, dx' \quad (11)$$

where $\gamma_k(x, o)$ and $\beta_{k|k-1}(x, o \mid x', o')$ are the mode-augmented birth and spawn intensities.

The corresponding expanded full jump-variable notation is

$$f_{k|k-1}(x, o \mid x', o') = f_{k|k-1}(x \mid x', o)\, \pi_{k|k-1}(o \mid o') \quad (12)$$

where

$$\pi_{k|k-1}(o \mid o') \text{ is the mode transition probability,} \quad (13)$$
$$\gamma_k(x, o) = \gamma_k(x \mid o)\, \pi_{\gamma}(o), \quad (14)$$
$$\beta_{k|k-1}(x, o \mid x', o') = \beta_{k|k-1}(x \mid x', o)\, \pi_{\beta}(o \mid o'). \quad (15)$$

Integrating out the mode from the PHD, we obtain the multi-target state estimation,

$$D_{k|k}(x) = \sum_{o} D_{k|k}(x, o) \quad (16)$$

and the expected target number is $\hat{N}_{k|k} = \int \sum_{o} D_{k|k}(x, o)\, dx$.

III-B Backward PHD smoothing with general mode

Proposition 2

The general-mode-extended PHD smoothing can be derived as follows.

After obtaining the estimate of the target state at time $t$, we can perform backward smoothing by extending $x$ to $(x, o)$:

$$D_{t|k}(x, o) = D_{t|t}(x, o) \left[ 1 - p_{S,t}(x, o) + p_{S,t}(x, o) \sum_{o'} \int \frac{f_{t+1|t}(x', o' \mid x, o)\, D_{t+1|k}(x', o')}{D_{t+1|t}(x', o')}\, dx' \right] \quad (17)$$

where

$$f_{t+1|t}(x', o' \mid x, o) = f_{t+1|t}(x' \mid x, o')\, \pi_{t+1|t}(o' \mid o) \quad (18)$$

Proof: Substituting $\tilde{x} = (x, o)$ for $x$ in Eq. (9), we get

$$D_{t|k}(\tilde{x}) = D_{t|t}(\tilde{x}) \left[ 1 - p_{S,t}(\tilde{x}) + p_{S,t}(\tilde{x}) \int \frac{f_{t+1|t}(\tilde{x}' \mid \tilde{x})\, D_{t+1|k}(\tilde{x}')}{D_{t+1|t}(\tilde{x}')}\, d\tilde{x}' \right] \quad (19)$$

where the integral over the augmented state expands as

$$\int \cdot\, d\tilde{x}' = \sum_{o'} \int \cdot\, dx' \quad (20)$$

Then, substituting the factored Markov density (18) into the above equation yields (17).

Note that if we instantiate the general mode $o$ with the kinematic model index, we obtain the multi-model PHD filter and multi-model PHD smoothing, which is consistent with the work in [14].

IV Forward-backward PHD filter with class information

In this section, we will first introduce the PHD-JDTC filter and smoothing and then give their SMC implementation.

IV-A PHD-JDTC Filter

The jump Markov PHD filter is mainly designed for the case where the motion pattern of a target changes. In fact, however, the class information can also be viewed as a jump Markov variable. Specifically, the class of an existing target does not change with time, which makes it a special type of jump Markov variable. However, the class of a spawned target might differ from that of its parent; for instance, a missile launched from an aircraft. In [1], the class-conditional PHD-JDTC filter is derived based on the additivity of point process intensity functions. In this section, we show that the PHD-JDTC filter can easily be derived through the "top-down" approach, based on the PHD filter with general mode.

Proposition 3

Through the general-mode-extension method, the joint detection, tracking and classification PHD algorithm can be obtained in a "top-down" way.

Let us treat the class as a special kind of "mode" modeled as a jump Markov variable. If we augment the state $x$ with the class information $c$, the class-conditioned PHD filter is reached by substituting $(x, c)$ for $(x, o)$ in the general-mode PHD equations:

$$D_{k|k-1}(x, c) = \gamma_k(x, c) + \sum_{c'} \int \left[ p_{S,k}(x', c')\, f_{k|k-1}(x, c \mid x', c') + \beta_{k|k-1}(x, c \mid x', c') \right] D_{k-1|k-1}(x', c')\, dx' \quad (21)$$

$$D_{k|k}(x, c) = \left[ 1 - p_{D,k}(x, c) \right] D_{k|k-1}(x, c) + \sum_{z \in Z_k} \frac{p_{D,k}(x, c)\, g_k(z \mid x, c)\, D_{k|k-1}(x, c)}{\kappa_k(z) + \sum_{c'} \int p_{D,k}(x', c')\, g_k(z \mid x', c')\, D_{k|k-1}(x', c')\, dx'} \quad (22)$$

where

$$f_{k|k-1}(x, c \mid x', c') = f_{k|k-1}(x \mid x', c)\, \pi_{k|k-1}(c \mid c') \quad (23)$$
$$\beta_{k|k-1}(x, c \mid x', c') = \beta_{k|k-1}(x \mid x', c)\, \pi_{\beta}(c \mid c') \quad (24)$$

Assume that the class of an existing target does not change with time, i.e.

$$\pi_{k|k-1}(c \mid c') = \delta_{c', c} \quad (25)$$

For a spawned target, however, the class might be a time-varying jump Markov state.
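The two transition behaviors can be tabulated; the matrices below are hypothetical two-class numbers for illustration only (say class 0 is an aircraft and class 1 a missile).

```python
import numpy as np

# Identity transition for surviving targets (Eq. (25)): the class never switches.
survive_trans = np.eye(2)
# A spawned target may take a class different from its parent, e.g. an
# aircraft (class 0) mostly spawning missiles (class 1). Hypothetical values.
spawn_trans = np.array([[0.2, 0.8],
                        [0.0, 1.0]])

def predict_class(c, spawned, rng):
    """Draw the predicted class index for a particle whose parent has class c."""
    table = spawn_trans if spawned else survive_trans
    return rng.choice(len(table), p=table[c])
```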

The recursion then takes the following form. In the state prediction step,

$$D_{k|k-1}(x, c) = \gamma_k(x, c) + \int p_{S,k}(x')\, f_{k|k-1}(x \mid x', c)\, D_{k-1|k-1}(x', c)\, dx' + \sum_{c'} \int \beta_{k|k-1}(x \mid x', c)\, \pi_{\beta}(c \mid c')\, D_{k-1|k-1}(x', c')\, dx' \quad (26)$$

The observation update is as follows:

$$D_{k|k}(x, c) = \left[ 1 - p_{D,k}(x, c) \right] D_{k|k-1}(x, c) + \sum_{z \in Z_k} \frac{p_{D,k}(x, c)\, g_k(z \mid x, c)\, D_{k|k-1}(x, c)}{\kappa_k(z) + \sum_{c'} \int p_{D,k}(x', c')\, g_k(z \mid x', c')\, D_{k|k-1}(x', c')\, dx'} \quad (27)$$

The number of targets can be calculated by summing over all classes, i.e.,

$$\hat{N}_{k|k} = \sum_{c} \int D_{k|k}(x, c)\, dx \quad (28)$$

Finally, we obtain the formulation of the PHD-JDTC filter.
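In SMC form, Eq. (28) is simply a sum of per-class particle masses; the weights below are made-up numbers for illustration.

```python
import numpy as np

# Hypothetical per-class particle weights from a two-class PHD-JDTC filter.
weights_by_class = {0: np.full(300, 0.004), 1: np.full(500, 0.0016)}

# Eq. (28): the total expected target number is the sum of per-class PHD masses.
n_per_class = {c: w.sum() for c, w in weights_by_class.items()}
n_total = sum(n_per_class.values())
# Here the class-0 mass is 1.2 and the class-1 mass is 0.8, so the
# estimate is two targets in total.
```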

IV-B PHD-JDTC smoothing

Proposition 4

The class-conditioned backward PHD smoother is

$$D_{t|k}(x, c) = D_{t|t}(x, c) \left[ 1 - p_{S,t}(x, c) + p_{S,t}(x, c) \int \frac{f_{t+1|t}(x' \mid x, c)\, D_{t+1|k}(x', c)}{D_{t+1|t}(x', c)}\, dx' \right] \quad (29)$$

where, for surviving targets,

$$f_{t+1|t}(x', c' \mid x, c) = f_{t+1|t}(x' \mid x, c)\, \delta_{c', c} \quad (30)$$

Proof: we extend $x$ by adding the class information $c$, i.e. $(x, c)$, and substitute into Equation (9). We get

$$D_{t|k}(x, c) = D_{t|t}(x, c) \left[ 1 - p_{S,t}(x, c) + p_{S,t}(x, c) \sum_{c'} \int \frac{f_{t+1|t}(x', c' \mid x, c)\, D_{t+1|k}(x', c')}{D_{t+1|t}(x', c')}\, dx' \right] \quad (31)$$

where

$$f_{t+1|t}(x', c' \mid x, c) = f_{t+1|t}(x' \mid x, c')\, \pi_{t+1|t}(c' \mid c) \quad (32)$$

Assume that the type of an existing target does not change with time, so that $\pi_{t+1|t}(c' \mid c) = \delta_{c', c}$; using Equation (26) for the predicted PHD, we obtain the PHD-JDTC smoothing (29).

The final state estimate after smoothing is extracted from

$$D_{t|k}(x) = \sum_{c} D_{t|k}(x, c) \quad (33)$$

and the number of targets after smoothing is

$$\hat{N}_{t|k} = \sum_{c} \int D_{t|k}(x, c)\, dx \quad (34)$$

Remark: In the smoothing stage, different kinds of targets can be processed by their own class-conditioned PHD smoothers. On the other hand, there is information interaction in the proposed forward-backward PHD smoothing algorithm: in the forward filtering step, the spawned targets exchange information in the prediction phase, while different targets interact with each other through the joint likelihood normalization in the update stage. There is also information exchange in the backward smoothing, because the class of a spawned target might differ from that of the target it spawned from.

IV-C SMC implementation

Based on the equations above, the forward-backward smoothing can be expressed with an explicit structure. From Fig. 1 we can see the mutual information exchange between targets from different classes.

Fig. 1: Recursive forward-backward PHD JDTC smoothing

Next, we will present how to implement PHD-JDTC smoothing with SMC. The filtering step can be divided into three steps: particle prediction, update and resampling.

Assume the state vector of a particle with class and model information is

$$\tilde{x}^{(i)}_k = \left( x^{(i)}_k, m^{(i)}_k, c^{(i)}_k, w^{(i)}_k \right) \quad (35)$$

Step 1: Particle prediction. Assume that at time $k-1$ the PHD is $D_{k-1|k-1}(x, c)$. The targets of class $c$ are represented by equally weighted particles $\{x^{(i)}_{k-1}, w^{(i)}_{k-1}\}$, and the states of the targets in the different classes are propagated through their state transition models.

For the existing targets, the particles directly propagate their states within their own class-conditioned PHD filter, so the predicted class of those particles is

$$c^{(i)}_{k|k-1} = c^{(i)}_{k-1} \quad (36)$$

where $c^{(i)}_{k-1}$ is the class of the original target. Let $J_k$ be the number of particles for spawned targets. Usually, the proposal distribution is set equal to the class transition probability of spawned targets, $\pi_{\beta}(c \mid c')$. The weight of a spawned-target particle is then

$$w^{(i)}_{k|k-1} = \frac{\beta_{k|k-1}\!\left( x^{(i)}_{k|k-1} \mid x'^{(i)} \right) w'^{(i)}_{k-1}}{J_k\, q_k\!\left( x^{(i)}_{k|k-1} \mid x'^{(i)} \right)} \quad (37)$$

For the maneuvering targets, the prediction of the model index uses the same random sampling approach: the predicted model is sampled from the distribution $\pi(m \mid m')$, and the particle weight becomes

$$w^{(i)}_{k|k-1} = p_{S,k}\, w^{(i)}_{k-1} \quad (38)$$

After the prediction step for all classes of particles, the particles of each class are represented as $\{x^{(i)}_{k|k-1}, w^{(i)}_{k|k-1}\}$.

Step 2: Particle state update.

$$w^{(i)}_{k} = \left[ 1 - p_{D,k} \right] w^{(i)}_{k|k-1} + \sum_{z \in Z_k} \frac{p_{D,k}\, g_k\!\left( z \mid x^{(i)}, c^{(i)} \right) g_c\!\left( a_z \mid c^{(i)} \right) w^{(i)}_{k|k-1}}{\kappa_k(z) + C_k(z)} \quad (39)$$

where

$$C_k(z) = \sum_{j} p_{D,k}\, g_k\!\left( z \mid x^{(j)}, c^{(j)} \right) g_c\!\left( a_z \mid c^{(j)} \right) w^{(j)}_{k|k-1} \quad (40)$$

$g_k(\cdot)$ is the state measurement likelihood function, and $g_c(\cdot)$ is the class (feature) measurement likelihood function.

Step 3: Particle resampling. After the state update, the particles of each class are resampled within the class to avoid particle depletion. Finally we get the updated particle collection $\{x^{(i)}_{k}, w^{(i)}_{k}\}$.
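A sketch of the within-class resampling in Step 3, assuming systematic resampling; the key detail is that the resampled weights must preserve the class PHD mass rather than normalize it to 1.

```python
import numpy as np

def resample_preserving_mass(particles, weights, n_out, rng):
    """Systematic resampling within one class: draw n_out particles with
    probability proportional to weight, then give each the equal weight
    mass / n_out so the class PHD mass (expected target number) is kept."""
    mass = weights.sum()
    positions = (rng.random() + np.arange(n_out)) / n_out       # stratified grid
    idx = np.searchsorted(np.cumsum(weights) / mass, positions) # inverse CDF lookup
    return particles[idx], np.full(n_out, mass / n_out)
```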

Step 4: Particle smoothing. We denote the Probability Hypothesis Density as follows,

$$D_{t|t}(x, c) \approx \sum_{i} w^{(i)}_{t}\, \delta\!\left( x - x^{(i)}_{t} \right) \quad (41)$$

For the targets that belong to class $c$, we have the smoothed PHD

$$D_{t|k}(x, c) \approx \sum_{i} w^{(i)}_{t|k}\, \delta\!\left( x - x^{(i)}_{t} \right) \quad (42)$$

For steps $t = k-1, \ldots, k-\Delta$, the smoothed weights are computed backward as

$$w^{(i)}_{t|k} = w^{(i)}_{t} \left[ 1 - p_{S} + p_{S} \sum_{j} \frac{w^{(j)}_{t+1|k}\, f_{t+1|t}\!\left( x^{(j)}_{t+1} \mid x^{(i)}_{t}, c \right)}{d^{(j)}_{t+1}} \right] \quad (43)$$

where

$$d^{(j)}_{t+1} = \gamma_{t+1}\!\left( x^{(j)}_{t+1}, c \right) + p_{S} \sum_{l} w^{(l)}_{t}\, f_{t+1|t}\!\left( x^{(j)}_{t+1} \mid x^{(l)}_{t}, c \right) \quad (44)$$

The output of this step is a new set of particles $\{x^{(i)}_{t}, w^{(i)}_{t|k}\}$. If we neglect spawned targets, it is easy to see that the PHD-JDTC smoothing runs independently across the different classes of targets.

Step 5: Resampling after smoothing. To avoid particle depletion, we resample the particles within each class again. Finally, the "smoothed" particles belonging to class $c$ at time $t$ become $\{x^{(i)}_{t|k}, w^{(i)}_{t|k}\}$.

Step 6: State extraction and cardinality estimation. Intra-class particle clustering is then performed on the different kinds of particles, the smoothed multi-target motion states are extracted, and all the particle weights are summed to obtain the estimated number of targets after smoothing.
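Step 6 can be sketched as follows, assuming a simple weighted k-means on 1-D particle positions; the paper does not specify the clustering algorithm, so this is one plausible choice.

```python
import numpy as np

def extract_states(particles, weights, rng, iters=20):
    """Round the PHD mass to get the target number, then run a small
    weighted k-means on the 1-D particle positions to extract the states."""
    n = max(int(round(weights.sum())), 0)
    if n == 0:
        return np.empty(0)
    centers = particles[rng.choice(len(particles), n, replace=False)]  # init
    for _ in range(iters):
        d = np.abs(particles[:, None] - centers[None, :])  # 1-D distances
        lab = d.argmin(axis=1)                             # nearest-center labels
        for c in range(n):
            m = lab == c
            if m.any():
                centers[c] = np.average(particles[m], weights=weights[m])
    return np.sort(centers)
```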

Since the smoothing process involves a large number of interactions between particles, its computational complexity is high. To meet real-time requirements, the k-d tree method can be used. The k-d tree method exploits the case where a large number of particles are densely distributed in space, and its "group-to-group" evaluation can greatly increase the computation speed within a tolerated error.
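The gating idea behind this speed-up can be illustrated in 1-D without an actual k-d tree library: a sorted array with binary search plays the same role, and only particle pairs within a few standard deviations of the (assumed Gaussian) transition density are evaluated, the rest being treated as zero.

```python
import numpy as np

def gated_transition(x_t, x_next, q=0.5, gate=4.0):
    """Gated evaluation of f(x_{t+1}^j | x_t^i): sort the time-t particles
    once, then for each t+1 particle use binary search to visit only the
    particles within `gate` standard deviations. In 1-D a sorted array
    stands in for the k-d tree; distant pairs are simply skipped."""
    order = np.argsort(x_t)
    xs = x_t[order]
    f = np.zeros((len(x_next), len(x_t)))
    norm = 1.0 / (q * np.sqrt(2.0 * np.pi))
    for j, xj in enumerate(x_next):
        lo = np.searchsorted(xs, xj - gate * q)
        hi = np.searchsorted(xs, xj + gate * q)
        cols = order[lo:hi]                      # only nearby time-t particles
        f[j, cols] = norm * np.exp(-0.5 * ((xj - x_t[cols]) / q) ** 2)
    return f
```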

V Evaluation

To verify the performance of the proposed PHD-JDTC forward-backward smoothing, we compare it with the PHD-JDTC filter in target crossover and parallel motion scenarios.

The attribute information of a target in [1] is extracted from the signal-to-noise ratio (SNR) of the radar measurement signal, which is assumed known. In practice, however, the true average SNR of a target is usually unknown, or it varies within a certain interval. In [15], the likelihood function of an unknown-SNR target is obtained by integrating over the possible SNR interval of the target. In this paper, we first partition the possible SNRs of the different target categories into intervals, and then use the unknown-SNR likelihood functions as the clutter and per-class attribute likelihood functions.

Suppose there are two different categories of targets in the surveillance area with different average SNRs. We first divide the SNR axis into a high-SNR interval and a low-SNR interval. Using the Rayleigh-distributed amplitude model, the intensity of the signal is obtained with an envelope detector. The signal amplitude probability densities of clutter and of the different target categories are

$$p_0(a) = a\, \exp\!\left( -\frac{a^2}{2} \right) \quad (45)$$
$$p_1(a \mid c) = \frac{1}{d^c_2 - d^c_1} \int_{d^c_1}^{d^c_2} \frac{a}{1 + d}\, \exp\!\left( -\frac{a^2}{2(1 + d)} \right) dd \quad (46)$$

where $a$ is the target signal amplitude and $[d^c_1, d^c_2]$ defines the possible average signal-to-noise-ratio interval of target category $c$. The false alarm probability is

$$P_{FA} = \exp\!\left( -\frac{\tau^2}{2} \right) \quad (47)$$

the target detection probability is

$$P_D(d) = \exp\!\left( -\frac{\tau^2}{2(1 + d)} \right) \quad (48)$$

and $\tau$ is the signal detection threshold.

The probability density functions restricted to amplitudes above the signal detection threshold can be normalized, and the clutter and attribute likelihood functions of the different classes of targets become:

$$g_0(a) = \frac{p_0(a)}{P_{FA}}, \qquad g_c(a) = \frac{p_1(a \mid c)}{P_D(c)}, \qquad a > \tau \quad (49)$$

where $P_D(c)$ is obtained by averaging (48) over the SNR interval of class $c$.
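The amplitude model of Eqs. (45)-(48) can be written down directly; the sketch below assumes unit noise power, a flat prior on the unknown SNR over its interval, and a simple grid average for the integral in Eq. (46).

```python
import numpy as np

def clutter_amp_pdf(a):
    """Rayleigh clutter amplitude density (Eq. (45)), unit noise power."""
    return a * np.exp(-0.5 * a ** 2)

def target_amp_pdf(a, snr):
    """Rayleigh target amplitude density at a known average SNR."""
    return (a / (1.0 + snr)) * np.exp(-0.5 * a ** 2 / (1.0 + snr))

def target_amp_pdf_unknown(a, snr_lo, snr_hi, n=400):
    """Eq. (46): average the known-SNR density over the assumed SNR
    interval (flat prior), approximating the integral on a grid."""
    d = np.linspace(snr_lo, snr_hi, n)
    return target_amp_pdf(a, d).mean()

def p_fa(tau):
    """Eq. (47): false-alarm probability at detection threshold tau."""
    return np.exp(-0.5 * tau ** 2)

def p_d(tau, snr):
    """Eq. (48): detection probability for a target of average SNR."""
    return np.exp(-0.5 * tau ** 2 / (1.0 + snr))
```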