I Introduction
Multitarget tracking is a challenging problem for surveillance systems. It requires estimating not only a set of dynamic target states but also the number and types of targets. Recently, the probability hypothesis density (PHD) filter has received much attention [1, 2, 3, 4, 5, 6]. Compared with traditional association-based multitarget tracking approaches such as joint probabilistic data association (JPDA) and multiple hypothesis tracking (MHT), the PHD filter avoids explicit data association between measurements and tracks, because measurements and false alarms are represented as random sets. The PHD filter is capable of dealing with target birth, spawning, and disappearance in dense clutter.
Currently, there are two main kinds of PHD implementation: the sequential Monte Carlo PHD filter [3] and the Gaussian mixture PHD filter [4]. Compared with the Gaussian mixture PHD filter, the sequential Monte Carlo approximation is capable of handling nonlinear, non-Gaussian problems. To track maneuvering targets, multiple-model sequential Monte Carlo PHD filters are introduced in . Smoothing techniques use future observations to improve the precision of the current state estimate. [7] derives the PHD smoothing based on the finite set statistics method. It can also be extended to multiple-model cases, yielding the multiple-model PHD smoothing. However, Mahler points out that existing works have adopted a bottom-up theoretical approach: they take the PHD filter or the CPHD filter as the starting point and then attempt to generalize it to jump Markov systems. None have adopted a theoretically top-down approach, which begins with the multitarget Bayes filter as the starting point, generalizes it to a multitarget jump Markov filter, and then, and only then, derives the PHD filter equations from this generalized filter. As a result, it is unclear whether any of these proposed jump Markov PHD filters is fully rigorous from a multitarget-statistics point of view. [8] derived a JMNS version of the PHD filter for multitarget jump Markov systems through a top-down method.
The feature or class information of a target is useful for improving tracking performance [9]. By incorporating feature information into the particle filter, JPDA, and MHT, these methods become more effective at tracking closely spaced parallel-moving targets or crossing targets from different classes. Random finite set (RFS) theory also provides an efficient tool for incorporating feature information. If feature measurements are considered, the RFS-based PHD filter can also be applied to joint detection, tracking and classification [10, 11, 12, 13]. Yang [1] proposed assigning a class-matched PHD-like filter to each target type, with a class-dependent kinematic model set to describe the kinematics of each target type precisely.
PHD smoothing improves the performance of the PHD filter. However, to the best of our knowledge, PHD smoothing with classification information has not yet been considered in the literature. Therefore, we address this issue in this paper and derive the classification-aided PHD smoothing. Specifically, we derive the general feature-conditioned forward-backward PHD smoothing through the "top-down" method.
In this paper, we make the following contributions:

The forward-backward PHD-JDTC smoothing is proposed using a "top-down" approach, which can effectively deal with the multitarget joint detection, tracking and classification problem.

The sequential Monte Carlo (SMC) implementation of the forward-backward PHD-JDTC smoothing is presented, and the structure of the PHD-JDTC smoothing is analyzed.

We propose to utilize the signal amplitude of unknown-SNR targets for the JDTC problem, which avoids requiring a priori information about the target's average SNR. Simulation results show that our approach makes effective decisions on class information and outperforms the state-of-the-art approach, the PHD-JDTC filter, in target tracking accuracy.
This paper is organized as follows. Section II provides a brief review of forward-backward PHD smoothing based on random finite set theory; it also presents the forward PHD filter with a class-dependent kinematic model set. In Section III, the PHD filter and smoothing with a general feature are derived through a "top-down" approach. In Section IV, the forward-backward PHD-JDTC smoothing is derived by considering the class information, and its sequential Monte Carlo implementation is provided. In Section V, a simulation case is designed and the evaluation results show the performance of the proposed approach. Finally, conclusions and future work are given in Section VI.
II RFS-based forward-backward PHD smoothing
In multitarget tracking, both the number of targets and their states are random, as are the number of measurements and the measurements themselves. Therefore, the states and measurements can be modeled by random finite sets. This section reviews forward-backward PHD smoothing, which provides the basis for deriving the forward-backward PHD smoothing with an augmented general mode. In Subsection II-A, random finite sets are used to model the multitarget states and observations, and the multitarget Bayes filter and smoothing are given. Their first-order approximation, the PHD smoothing algorithm, is outlined in Subsection II-B.
II-A Multitarget Bayes forward-backward smoothing
Assume that at time $k$ a single-target state $x_k$ belongs to the state space $\mathcal{X}$, i.e. $x_k \in \mathcal{X}$; then the multitarget state can be defined as follows:
(1) $X_k = \{x_{k,1}, \dots, x_{k,N_k}\} \in \mathcal{F}(\mathcal{X})$
Suppose that the single-target observation space is $\mathcal{Z}$ and a single-target observation at time $k$ is $z_k$; the multitarget observation set is given by
(2) $Z_k = \{z_{k,1}, \dots, z_{k,M_k}\} \in \mathcal{F}(\mathcal{Z})$
where $N_k$ and $M_k$ are the numbers of targets and measurements, respectively, while $X_k$ and $Z_k$ are finite subsets of $\mathcal{X}$ and $\mathcal{Z}$, respectively.
The aggregate of all measurements from time 1 to $k$ can be expressed as
(3) $Z^{1:k} = Z_1 \cup \dots \cup Z_k$
In the framework of FISST (Finite Set Statistics), the uncertainty in the multitarget states and the corresponding observations is represented by random finite sets.
Forward-backward smoothing consists of forward filtering followed by backward smoothing. In the forward filtering step, the posterior density is propagated forward to time $k$ via the Bayes recursion. In the backward smoothing step, the smoothed density is propagated backward from time $k$ to $t < k$ via the backward smoothing recursion. Analogous to the single-target Bayes predictor, the multitarget forward prediction is calculated as follows:
(4) $f_{k|k-1}(X_k \mid Z^{1:k-1}) = \int f_{k|k-1}(X_k \mid X)\, f_{k-1|k-1}(X \mid Z^{1:k-1})\, \delta X$
where $\int \cdot\, \delta X$ and $\delta / \delta Z$ represent the set integral and set derivative, respectively, and $f_{k|k-1}(X_k \mid X_{k-1})$ is the multitarget Markov density.
With the measurement set $Z_k$ at time $k$, the multitarget forward update is given by
(5) $f_{k|k}(X_k \mid Z^{1:k}) = \dfrac{f_k(Z_k \mid X_k)\, f_{k|k-1}(X_k \mid Z^{1:k-1})}{\int f_k(Z_k \mid X)\, f_{k|k-1}(X \mid Z^{1:k-1})\, \delta X}$
where $f_k(Z_k \mid X_k)$ is the multisource likelihood function.
The smoothed multitarget density is propagated backward, from time $t+1$ to $t$, via the multitarget backward smoothing recursion:
(6) $f_{t|k}(X_t \mid Z^{1:k}) = f_{t|t}(X_t \mid Z^{1:t}) \int \dfrac{f_{t+1|t}(X \mid X_t)\, f_{t+1|k}(X \mid Z^{1:k})}{f_{t+1|t}(X \mid Z^{1:t})}\, \delta X$
II-B PHD filter and smoothing
The multitarget Bayes filter is intractable to implement computationally. Under the assumptions that no target generates more than one measurement, each measurement is generated by at most one target, all measurements are conditionally independent given the target states, missed detections may occur, and false alarms follow a multiobject Poisson process, Mahler [10] proposed the first-order multitarget moment approximation of the multitarget Bayes filter, the Probability Hypothesis Density (PHD) $D_{k|k}(x)$. Given any region $S$ of the single-target state space $\mathcal{X}$, the integral $\int_S D_{k|k}(x)\,\mathrm{d}x$ is the expected number of targets in $S$. In particular, if $S$ is the entire state space, then the integral is the total expected number of targets in the scene. Compared with the optimal multitarget Bayes recursion, the PHD filter is much easier to implement because of its first-order moment approximation. Moreover, its computational cost is low because the PHD integrals are performed on the single-target state space.
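As a quick numerical illustration of this property (with a made-up one-dimensional intensity, not one from the paper), the sketch below integrates a Gaussian-mixture PHD whose component weights are 0.9 and 1.6, so the set integral over the whole space recovers 0.9 + 1.6 = 2.5 expected targets:

```python
import numpy as np

# Illustrative intensity: a Gaussian mixture whose component weights are
# expected target counts (not probabilities), summing to 2.5.
def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def intensity(x):
    return 0.9 * gaussian(x, -5.0, 1.0) + 1.6 * gaussian(x, 4.0, 1.5)

# Riemann-sum integral over (effectively) the whole state space
x = np.linspace(-60.0, 60.0, 240001)
dx = x[1] - x[0]
n_expected = float((intensity(x) * dx).sum())
print(round(n_expected, 3))  # 2.5
```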
PHD forward-backward smoothing can be derived from the physical-space approach [14] and from standard point process theory [5], respectively.
PHD forward filtering. In the prediction step,
(7) $D_{k|k-1}(x) = \gamma_k(x) + \int \big[ p_{S,k}(x')\, f_{k|k-1}(x \mid x') + \beta_{k|k-1}(x \mid x') \big] D_{k-1|k-1}(x')\,\mathrm{d}x'$
where $f_{k|k-1}(x \mid x')$ is the single-target transition density, $\gamma_k(x)$ is the intensity of birth targets at time $k$, $\beta_{k|k-1}(x \mid x')$ is the intensity of spawned targets, and $p_{S,k}(x')$ is the survival probability of existing targets.
The state is then updated:
(8) $D_{k|k}(x) = \big[1 - p_{D,k}(x)\big] D_{k|k-1}(x) + \sum_{z \in Z_k} \dfrac{p_{D,k}(x)\, g_k(z \mid x)\, D_{k|k-1}(x)}{\kappa_k(z) + \int p_{D,k}(x')\, g_k(z \mid x')\, D_{k|k-1}(x')\,\mathrm{d}x'}$
where $p_{D,k}(x)$ is the detection probability, $g_k(z \mid x)$ is the single-target measurement likelihood, and $\kappa_k(z)$ is the clutter intensity.
PHD backward smoothing. Smoothing provides better estimates than filtering because it makes use of more measurements. There are mainly three kinds of smoothing techniques: fixed-interval smoothing, fixed-point smoothing, and fixed-lag smoothing. Fixed-lag smoothing estimates the state at time $k-L$ given the measurements up to time $k$, for a fixed time lag $L$. Here we consider fixed-lag smoothing for the multitarget Bayes backward recursion.
(9) $D_{t|k}(x) = D_{t|t}(x) \left[ 1 - p_{S} + \int \dfrac{p_{S}\, f_{t+1|t}(x' \mid x)\, D_{t+1|k}(x')}{D_{t+1|t}(x')}\,\mathrm{d}x' \right]$
where $D_{t+1|t}(x') = \gamma_{t+1}(x') + \int p_{S}\, f_{t+1|t}(x' \mid x)\, D_{t|t}(x)\,\mathrm{d}x$. It should be noted that the backward recursion is initialized with the filtering result at the present time $k$ and stopped at time $k-L$, where $L$ is the time lag of the smoothing algorithm.
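The control flow of a fixed-lag smoother can be sketched generically as follows. This is a structural sketch only: `filter_step` and `backward_step` are placeholder callables standing in for the forward PHD recursion and the backward recursion, not implementations of them.

```python
from collections import deque

# Generic fixed-lag smoother skeleton: keep the last L+1 filtered posteriors
# in a buffer; once the buffer is full, a backward pass starting from the
# newest entry refines the oldest one, which is emitted as the smoothed
# estimate for time k-L.
def fixed_lag_smoother(measurements, L, filter_step, backward_step, init):
    buffer = deque(maxlen=L + 1)   # filtered posteriors for times k-L .. k
    posterior = init
    outputs = []
    for z in measurements:
        posterior = filter_step(posterior, z)   # forward filtering step
        buffer.append(posterior)
        if len(buffer) == L + 1:
            smoothed = buffer[-1]
            # backward recursion from time k down to k-L
            for t in range(len(buffer) - 2, -1, -1):
                smoothed = backward_step(buffer[t], smoothed)
            outputs.append(smoothed)            # smoothed estimate at k-L
    return outputs
```

With identity placeholders (filtering returns the measurement, backward smoothing returns the filtered value), the output stream simply lags the input by L steps, which makes the buffering behavior easy to check.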
III Forward-backward PHD filter with general mode
The PHD smoothing algorithm in Section II only takes into account the kinematic state of the targets, and so cannot handle maneuvering target tracking or joint detection, tracking and classification. To tackle these problems, it is necessary to append a general mode, which may be the kinematic mode, the class, or a combination of both, to the target state. The general mode can be viewed as a general jump Markov state, which depends only on the state at the previous time step and is independent of earlier states. In this section, the forward-backward PHD smoothing with states extended by a general jump Markov mode is given in detail. The forward extended PHD filter is outlined in Subsection III-A, and the backward extended PHD smoothing recursion is derived in Subsection III-B.
III-A Forward PHD filter with general mode
Proposition 1
Kinematic mode and class information can both be treated as a general jump Markov mode.
The single-target state space consists of augmented states of the form $(x, o)$, based on which the multitarget state has the form $\{(x_{k,1}, o_{k,1}), \dots, (x_{k,N_k}, o_{k,N_k})\}$, where $o$ is the general mode with the jump Markov property. Substituting $x$ with $(x, o)$ in the PHD prediction and update equations, we get the following expressions.
(10) $D_{k|k}(x,o) = \big[1 - p_{D,k}(x,o)\big] D_{k|k-1}(x,o) + \sum_{z \in Z_k} \dfrac{p_{D,k}(x,o)\, g_k(z \mid x,o)\, D_{k|k-1}(x,o)}{\kappa_k(z) + \sum_{o'} \int p_{D,k}(x',o')\, g_k(z \mid x',o')\, D_{k|k-1}(x',o')\,\mathrm{d}x'}$
where $g_k(z \mid x,o)$ is the mode-conditioned measurement likelihood. The prediction step is
(11) $D_{k|k-1}(x,o) = \gamma_k(x,o) + \sum_{o'} \int \big[ p_{S,k}(x',o')\, f_{k|k-1}(x,o \mid x',o') + \beta_{k|k-1}(x,o \mid x',o') \big] D_{k-1|k-1}(x',o')\,\mathrm{d}x'$
where $f_{k|k-1}(x,o \mid x',o')$ is the augmented-state transition density.
The corresponding expanded full jump-variable notation is:
(12) 
where
(13) 
(14) 
(15) 
Integrating the mode out of the PHD, we obtain the multitarget state estimate,
(16) $D_{k|k}(x) = \sum_{o} D_{k|k}(x, o)$
and the expected target number is $\hat N_{k|k} = \int D_{k|k}(x)\,\mathrm{d}x$.
III-B Backward PHD smoothing with general mode
Proposition 2
A general-mode-extended PHD smoothing can be achieved.
After obtaining the target-state estimate at time $k$, we can perform backward smoothing by extending $x$ to $(x, o)$:
(17) 
where
(18) 
Proof: Substituting $x$ with $(x, o)$ in Eq. (9), we get
(19) 
where
(20) 
Then, substituting it into the above equation completes the proof.
Note that if we replace the general mode $o$ with the kinematic model index, we obtain the multiple-model PHD filter and the multiple-model PHD smoothing, which is consistent with the work in [14].
IV Forward-backward PHD filter with class information
In this section, we first introduce the PHD-JDTC filter and smoothing and then give their SMC implementation.
IV-A PHD-JDTC filter
The jump Markov PHD filter is mainly designed for the case where the motion pattern of a target changes. In fact, however, the class information can also be viewed as a jump Markov variable. Specifically, the class of an existing target does not change with time, which makes it a special type of jump Markov variable. For a spawned target, however, the class might differ from that of its parent; for instance, a missile may be launched from an aircraft. In [1], the class-conditioned PHD-JDTC filter is derived based on the additivity of point-process intensity functions. In this section, we show that the PHD-JDTC filter can be derived easily by the "top-down" approach, based on the PHD filter with general mode.
Proposition 3
Through the general-mode extension, the joint detection, tracking and classification PHD algorithm can be obtained in a "top-down" way.
Let us treat the class as a special kind of "mode" modeled as a jump Markov variable. If we augment the state $x$ with class information $c$, the class-conditioned PHD filter can be reached by substituting $x$ with $(x, c)$ in Equation (9).
(21)
(22) 
where
(23) 
(24) 
Assume that the class of an existing target does not change with time, i.e.
(25) $\pi_{k|k-1}(c \mid c') = \delta_{c'}(c)$
However, the class of a spawned target might be a time-variant jump Markov state.
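This class-transition assumption can be illustrated with a small sketch: surviving targets use an identity kernel, while spawned targets use a general Markov matrix. The matrix entries below are made-up illustrative values, not parameters from the paper.

```python
import numpy as np

# Class-transition kernels: an existing target keeps its class (identity
# kernel, the time-invariant assumption), while a spawned target may switch
# class (e.g. aircraft -> missile) according to a Markov matrix.
N_CLASSES = 2
PI_SURVIVE = np.eye(N_CLASSES)        # existing targets: class is invariant
PI_SPAWN = np.array([[0.2, 0.8],      # row c': P(child class | parent c')
                     [0.0, 1.0]])     # illustrative values

def predict_class(p_class, spawned):
    """Propagate a class probability distribution one step forward."""
    kernel = PI_SPAWN if spawned else PI_SURVIVE
    return p_class @ kernel

p = np.array([0.7, 0.3])
print(predict_class(p, spawned=False))   # unchanged: [0.7 0.3]
print(predict_class(p, spawned=True))    # mixed:     [0.14 0.86]
```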
The equations now take the following form. In the state prediction step,
(26) 
The measurement update is as follows,
(27) 
The number of targets can be calculated by summing over all classes, i.e.,
(28) 
Finally, we obtain the formulation of the PHD-JDTC filter.
IV-B PHD-JDTC smoothing
Proposition 4
The class-conditioned backward PHD smoothing is
(29) 
where
(30) 
Proof: we extend $x$ by adding the class information $c$, i.e. $(x, c)$, and substitute it into Equation (9). We get
(31) 
where
(32) 
Assume that the class of an existing target does not change with time. Using Equation (26), we obtain the PHD-JDTC smoothing.
The final state estimate after smoothing is
(33) 
The number of targets after smoothing is
(34) 
Remark: In the smoothing stage, different kinds of targets can be processed by their own class-conditioned PHD smoothers. On the other hand, there is information interaction in the proposed forward-backward PHD smoothing algorithm. In the forward filtering step, spawned targets exchange information in the prediction phase, while different targets interact with each other through the joint likelihood function in the update stage. Information is also exchanged in the backward smoothing, because the class of a spawned target might differ from that of its parent.
IV-C SMC implementation
Based on the equations above, the forward-backward smoothing can be expressed with an explicit structure; Fig. 1 shows the mutual information exchange between targets from different classes.
Next, we present how to implement the PHD-JDTC smoothing with SMC. The filtering stage consists of three steps: particle prediction, update, and resampling.
Assume that the state vector of a particle with class and model information is
(35) 
Step 1: Particle prediction. Assume that at time $k-1$ the PHD of each class is represented by equally weighted particles; the states of targets in different classes are then updated according to their state transition models.
For existing targets, the particles update their states directly within their own PHD filter, and the predicted class of those particles is
(36)
where $c'$ is the class of the original target. For spawned targets, new particles are drawn; usually, the proposal distribution is set equal to the class transition probability of spawned targets. The weight of a spawned-target particle is
(37)
For maneuvering targets, the model prediction uses the same random sampling approach: the predicted particles are sampled from the model transition distribution, and the particle weights become
(38)
After the prediction step for all classes, the particle set of each class is obtained.
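A minimal sketch of this prediction step for one class follows. The two-model jump Markov set, the transition matrices, the noise level, and the survival probability are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each particle carries a kinematic state and a motion-model index m. The
# model index is sampled from a Markov transition matrix (jump Markov
# prediction), then the state is propagated through the sampled model.
PI_MODEL = np.array([[0.9, 0.1],
                     [0.2, 0.8]])                 # model transition matrix
F = [np.array([[1.0, 1.0], [0.0, 1.0]]),          # model 0: constant velocity
     np.array([[1.0, 0.5], [0.0, 1.0]])]          # model 1: slower dynamics
P_SURVIVE = 0.95

def predict(states, models, weights):
    # 1. sample the next motion model for every particle
    new_models = np.array([rng.choice(2, p=PI_MODEL[m]) for m in models])
    # 2. propagate each particle through its sampled model plus process noise
    new_states = np.stack([F[m] @ s for s, m in zip(states, new_models)])
    new_states += rng.normal(scale=0.1, size=new_states.shape)
    # 3. survival probability scales the weights (persistent-target PHD term)
    return new_states, new_models, P_SURVIVE * weights

states = rng.normal(size=(100, 2))
weights = np.full(100, 2.0 / 100)     # total PHD mass: 2.0 expected targets
models = np.zeros(100, dtype=int)
s1, m1, w1 = predict(states, models, weights)
print(w1.sum())   # 0.95 * 2.0 = 1.9 expected surviving targets
```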
Step 2: Particle weight update
(39) 
where
(40) 
is the state measurement likelihood function, and is the class measurement likelihood function.
Step 3: Particle resampling. After the weight update, the particles of each class are resampled within that class to avoid particle depletion. Finally, we obtain the updated particle collection.
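A sketch of within-class resampling, assuming multinomial resampling (the resampling scheme is an illustrative choice). Note that PHD particle weights sum to the expected target number of the class, so that total must be preserved rather than normalized to one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Multinomial resampling inside one class. Unlike a standard particle filter,
# the weights sum to the expected number of targets W_c of the class, so each
# resampled particle receives weight W_c / N rather than 1 / N.
def resample_within_class(states, weights, n_out):
    total = weights.sum()                       # expected target number W_c
    idx = rng.choice(len(weights), size=n_out, p=weights / total)
    return states[idx], np.full(n_out, total / n_out)

states = rng.normal(size=(500, 2))
weights = rng.uniform(size=500)
weights *= 1.7 / weights.sum()                  # pretend the class mass is 1.7
new_states, new_weights = resample_within_class(states, weights, 400)
print(new_weights.sum())   # still ~1.7: PHD mass preserved
```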
Step 4: Particle smoothing. We denote the probability hypothesis density as follows,
(41) 
For a target belonging to class $c$, we have the PHD
(42) 
For each backward smoothing step,
(43) 
where
(44) 
The output of this step is a new set of particles. If spawned targets are neglected, it is easy to see that the PHD-JDTC smoothing runs independently for each class of targets.
Step 5: Resampling after smoothing. To avoid particle depletion, we resample the particles within each class again. Finally, we obtain the "smoothed" particles belonging to class $c$ at time $k-L$.
Step 6: State extraction and cardinality estimation. Intra-class particle clustering is then performed on each kind of particle, the smoothed multitarget motion states are extracted, and all particle weights are summed to obtain the estimated number of targets after smoothing.
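A sketch of this extraction step, assuming weight-sum cardinality estimation followed by a few weighted k-means iterations; the clustering method and its deterministic farthest-point initialization are illustrative choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def init_centers(states, k):
    # deterministic farthest-point initialization so that well-separated
    # groups each receive one starting center
    centers = [states[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(states - c, axis=1) for c in centers], axis=0)
        centers.append(states[d.argmax()])
    return np.array(centers)

def extract_states(states, weights, iters=20):
    n_hat = max(int(round(weights.sum())), 1)   # cardinality = total weight
    centers = init_centers(states, n_hat)
    for _ in range(iters):                      # weighted k-means refinement
        d = np.linalg.norm(states[:, None, :] - centers[None], axis=2)
        label = d.argmin(axis=1)
        for j in range(n_hat):
            mask = label == j
            if weights[mask].sum() > 0:
                centers[j] = np.average(states[mask], axis=0,
                                        weights=weights[mask])
    return n_hat, centers

# two well-separated "targets": 300 particles each, total PHD mass 2.0
states = np.vstack([rng.normal([0.0, 0.0], 0.3, (300, 2)),
                    rng.normal([10.0, 10.0], 0.3, (300, 2))])
weights = np.full(600, 2.0 / 600)
n_hat, centers = extract_states(states, weights)
print(n_hat)   # 2
```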
Since the smoothing process involves a large number of interactions between particles, its computational complexity is high. To meet real-time requirements, the kd-tree method can be used here. The kd-tree method exploits the fact that large numbers of particles are densely distributed in space, and the "group-to-group" evaluation can greatly increase the computation speed within a tolerated error.
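The "group-to-group" idea can be sketched as follows. For simplicity this sketch groups particles with a fixed spatial grid rather than a kd-tree (an assumption of the sketch; a kd-tree makes the cells adaptive), and uses a Gaussian transition kernel as a stand-in for the smoothing recursion's particle-pair sums:

```python
import numpy as np

rng = np.random.default_rng(3)

# Backward smoothing needs sums of transition kernels over all particle pairs
# (O(N*M)). Grouping nearby source particles into cells and evaluating the
# kernel once per (target particle, cell) pair, weighted by the cell's total
# mass, approximates the same sums at a fraction of the cost.
def kernel(a, b, s=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / s**2)

def pair_sums_naive(x_next, x_prev, w_prev):
    return kernel(x_next, x_prev) @ w_prev            # exact, O(N*M)

def pair_sums_grouped(x_next, x_prev, w_prev, cell=0.3):
    keys = np.floor(x_prev / cell).astype(int)        # grid cell per particle
    out = np.zeros(len(x_next))
    for key in {tuple(k) for k in keys}:
        mask = (keys == key).all(axis=1)
        centroid = np.average(x_prev[mask], axis=0, weights=w_prev[mask])
        # one kernel evaluation per cell, weighted by the cell's total mass
        out += kernel(x_next, centroid[None])[:, 0] * w_prev[mask].sum()
    return out

x_prev = rng.normal(size=(2000, 2))
w_prev = np.full(2000, 3.0 / 2000)
x_next = rng.normal(size=(100, 2))
exact = pair_sums_naive(x_next, x_prev, w_prev)
approx = pair_sums_grouped(x_next, x_prev, w_prev)
print(np.abs(approx / exact - 1).max())   # small relative error
```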
V Evaluation
In order to verify the performance of the proposed PHD-JDTC forward-backward smoothing, we compare it with the PHD-JDTC filter in target-crossing and parallel-motion scenarios.
The attribute information of the target in [1] is extracted from the signal-to-noise ratio (SNR) of the radar measurement, which is assumed to be known. In practice, however, the true average SNR of a target is usually unknown or varies within a certain interval. In [15], the likelihood function of an unknown-SNR target is obtained by integrating over the possible SNR interval of the target. In this paper, we first partition the possible SNRs of the different target categories into intervals, and then use the unknown-SNR likelihood functions as the attribute likelihood functions of clutter and of each target class.
Suppose there are two different categories of targets in the surveillance area. The average SNRs of the targets are and . We first divide the SNR range into a high-SNR interval and a low-SNR interval . Using the Rayleigh amplitude model, the signal intensity is obtained with an envelope detector. The signal amplitude probability densities of clutter and of the different target categories are
(45) $p_0(a) = a \exp\!\left(-\dfrac{a^2}{2}\right)$
(46) $p_c(a) = \dfrac{1}{\bar d_2^{\,c} - \bar d_1^{\,c}} \int_{\bar d_1^{\,c}}^{\bar d_2^{\,c}} \dfrac{a}{1+d} \exp\!\left(-\dfrac{a^2}{2(1+d)}\right) \mathrm{d}d$
where $a$ is the target signal amplitude and $[\bar d_1^{\,c}, \bar d_2^{\,c}]$ defines the possible average SNR interval of target category $c$. The false alarm probability is
(47) $P_{FA} = \exp\!\left(-\dfrac{\tau^2}{2}\right)$
and the target detection probability is
(48) $P_D^c = \dfrac{1}{\bar d_2^{\,c} - \bar d_1^{\,c}} \int_{\bar d_1^{\,c}}^{\bar d_2^{\,c}} \exp\!\left(-\dfrac{\tau^2}{2(1+d)}\right) \mathrm{d}d$
where $\tau$ is the signal detection threshold.
The probability density functions above the detection threshold $\tau$ can be normalized, and the amplitude likelihood functions of clutter and of the different target classes become:
(49) $g^0(a) = \dfrac{p_0(a)}{P_{FA}}, \qquad g^c(a) = \dfrac{p_c(a)}{P_D^c}, \qquad a > \tau$
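The amplitude model above can be checked numerically. In the sketch below, the SNR interval [5, 15] and the threshold τ = 2 are illustrative values, and the exceedance probabilities are verified against direct integration of the densities above the threshold:

```python
import numpy as np

def p_clutter(a):
    # Rayleigh amplitude density of clutter
    return a * np.exp(-a**2 / 2)

def p_target(a, d1, d2, n=200):
    # target amplitude density averaged over the unknown SNR interval [d1, d2]
    d = np.linspace(d1, d2, n)
    dens = a[:, None] / (1 + d) * np.exp(-a[:, None]**2 / (2 * (1 + d)))
    return dens.mean(axis=1)

def p_fa(tau):
    # probability that a clutter amplitude exceeds the threshold
    return np.exp(-tau**2 / 2)

def p_d(tau, d1, d2, n=200):
    # detection probability averaged over the SNR interval
    d = np.linspace(d1, d2, n)
    return np.exp(-tau**2 / (2 * (1 + d))).mean()

tau = 2.0
a = np.linspace(tau, 30.0, 20001)
da = a[1] - a[0]
# exceedance probabilities agree with direct integration of the densities
print(p_fa(tau), (p_clutter(a) * da).sum())
print(p_d(tau, 5, 15), (p_target(a, 5, 15) * da).sum())
# normalized amplitude likelihoods for detected returns
g_clutter = p_clutter(a) / p_fa(tau)
g_target = p_target(a, 5, 15) / p_d(tau, 5, 15)
```

The normalized likelihoods integrate to one over the amplitudes above the threshold, which is what the filter's update step requires of them.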