## I Introduction

In Bayesian estimation for state-space models, smoothing yields significantly
better estimates than filtering by using the history of the states rather than
the most recent state [18], [6], [10]. Conditional on the observation history, filtering only considers the current
state via the *filtering density*, whereas smoothing considers the
sequence of states up to the current time via the *posterior density*.
Numerical methods for computing the filtering and posterior densities have a
long history and are still an active area of research; see for example
[1], [25], [9],
[10]. Recursive computation of the posterior density is also
known as *smoothing-while-filtering* [6].

A generalisation of state-space models that has attracted substantial interest in recent years is Mahler's Finite Set Statistics (FISST) framework for multi-object systems [14], [15], [16, 17]. Instead of a vector, the state of a multi-object system at each time, called the *multi-object state*, is a finite set of vectors. Since its inception, a host of algorithms have been developed for *multi-object state estimation* [16, 17]. By incorporating labels (or identities), multi-object state estimation provides a state-space formulation of the *multi-object tracking* problem, where the aim is to estimate the number of objects and their trajectories [5, 2, 16]. Numerically, this problem is far more complex than standard state estimation due to additional challenges such as false measurements, misdetections, and data association uncertainty.

In multi-object state estimation, the labeled multi-object filtering
recursion admits an analytic solution known as the Generalized Labeled
Multi-Bernoulli (GLMB) filter [28], [30]. Moreover,
this recursion can be implemented with linear complexity in the number of
measurements and quadratic in the number of hypothesized objects [31].
Since the filtering density only considers information on the current
multi-object state, earlier estimates cannot be updated with current data.
Consequently, apart from poorer performance compared to smoothing, an
important drawback in a multi-object context is *track fragmentation*,
where terminated trajectories are picked up again as new evidence from the
data emerges.

In this paper, we extend the GLMB filtering recursion to a (labeled) multi-object posterior recursion. Such a posterior captures all information on the set of underlying trajectories, and hence eliminates track fragmentation as well as improving overall tracking performance. Specifically, by introducing the multi-scan GLMB model, we derive an analytic multi-object posterior recursion. Interestingly, the multi-scan GLMB recursion takes on an even simpler and more intuitive form than the GLMB recursion. Its implementation, however, is far more challenging. Like the GLMB filter, the multi-scan GLMB filter needs to be truncated, and as shown in this article, truncation by retaining the components with highest weights minimizes the truncation error. Unlike the GLMB filter, finding the significant components of a multi-scan GLMB is an NP-hard multi-dimensional assignment problem. To solve this problem, we propose an extension of the Gibbs sampler for the 2-D assignment problem in [31] to higher dimensions. The resulting technique can be applied to compute the GLMB posterior off-line in one batch, or recursively as new observations arrive, thereby performing smoothing-while-filtering.

The remainder of this article is divided into 5 Sections. Section II summarizes relevant concepts in Bayesian multi-object state estimation and the GLMB filter. Section III introduces the multi-scan GLMB model and the multi-scan GLMB posterior recursion. Section IV presents an implementation of the multi-scan GLMB recursion using Gibbs sampling. Numerical studies are presented in Section V and conclusions are given in Section VI.

## II Background

Following the convention in [28], the list of variables $X_i, X_{i+1}, \ldots, X_j$ is abbreviated as $X_{i:j}$, and the inner product $\int f(x)g(x)\,dx$ is denoted by $\langle f, g \rangle$. For a given set $S$, $1_S(\cdot)$ denotes the indicator function of $S$, and $\mathcal{F}(S)$ denotes the class of finite subsets of $S$. For a finite set $X$, its cardinality (or number of elements) is denoted by $|X|$, and the product $\prod_{x \in X} h(x)$, for some function $h$, is denoted by the multi-object exponential $h^X$, with $h^{\emptyset} = 1$. In addition we use

$$\delta_Y[X] \triangleq \begin{cases} 1, & \text{if } X = Y \\ 0, & \text{otherwise} \end{cases}$$

for a generalization of the Kronecker delta that takes arbitrary arguments.
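To make this notation concrete, the multi-object exponential, indicator function, and generalized Kronecker delta can be sketched in Python (the function names are ours, chosen purely for illustration):

```python
from math import prod

def indicator(S, x):
    """1_S(x): 1 if x is in the set S, else 0."""
    return 1 if x in S else 0

def multi_object_exponential(h, X):
    """h^X = product of h(x) over x in X; equals 1 for the empty set."""
    return prod(h(x) for x in X)

def kronecker_delta(Y, X):
    """Generalized Kronecker delta: 1 if X == Y, else 0 (arbitrary arguments)."""
    return 1 if X == Y else 0
```

Note that `math.prod` over an empty collection returns 1, matching the convention $h^{\emptyset} = 1$.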

### II-A Trajectories and Multi-object States

This subsection summarizes the representation of trajectories via labeled multi-object states.

At time $k$, an existing object is described by a vector $x \in \mathbb{X}$ and
a unique label $\ell = (s, \iota)$, where $s$ is the time of birth, and
$\iota$ is a unique index to distinguish objects born at the same time (see
Fig. 1 in [30]). Let $\mathbb{L}_s$ denote the label space for
objects born at time $s$, then the label space for all objects up to time $k$
(including those born prior to $k$) is given by the disjoint union
$\mathbb{L}_{0:k} = \biguplus_{s=0}^{k} \mathbb{L}_s$ (note that $\mathbb{L}_{0:k-1} \subseteq \mathbb{L}_{0:k}$). Hence, a *labeled state* $\mathbf{x} = (x, \ell)$ at time $k$
is an element of $\mathbb{X} \times \mathbb{L}_{0:k}$.

A *trajectory* is a sequence of labeled states with a common label, at
consecutive times [28], i.e. a trajectory with label $\ell = (s, \iota)$ and kinematic states $x_s, x_{s+1}, \ldots, x_t$ is
the sequence

$$\tau = \big( (x_s, \ell), (x_{s+1}, \ell), \ldots, (x_t, \ell) \big). \qquad (1)$$

A *labeled multi-object* state $\mathbf{X}$ at time $k$ is a finite subset
of $\mathbb{X} \times \mathbb{L}_{0:k}$ with *distinct
labels*. More concisely, let $\mathcal{L}$ be the projection defined by
$\mathcal{L}(x, \ell) = \ell$, then $\mathbf{X}$ has distinct labels if and
only if the *distinct label indicator* $\Delta(\mathbf{X}) \triangleq \delta_{|\mathbf{X}|}\big[|\mathcal{L}(\mathbf{X})|\big]$ equals one. The labeled
states, at time $i$, of a set $S$ of trajectories (with distinct labels) is
the labeled multi-object state $\mathbf{X}_i = \{ \mathbf{x}_i^{(\tau)} : \tau \in S \}$, where
$\mathbf{x}_i^{(\tau)}$ denotes the labeled state of trajectory $\tau$ at time $i$.

Consider a sequence of labeled multi-object states $\mathbf{X}_{j:k}$ in the interval $\{j, \ldots, k\}$. Let $x_i^{(\ell)}$ denote the element of $\mathbf{X}_i$ with label $\ell$. Then the trajectory in $\mathbf{X}_{j:k}$ with label $\ell$ is the sequence of states with label $\ell$:

$$\tau^{(\ell)} = \big( (x_{s(\ell)}^{(\ell)}, \ell), \ldots, (x_{t(\ell)}^{(\ell)}, \ell) \big), \qquad (2)$$

where

$$s(\ell) = \min\big\{ i \in \{j, \ldots, k\} : \ell \in \mathcal{L}(\mathbf{X}_i) \big\} \qquad (3)$$

is the start time of label $\ell$ in the interval $\{j, \ldots, k\}$, and

$$t(\ell) = \max\big\{ i \in \{j, \ldots, k\} : \ell \in \mathcal{L}(\mathbf{X}_i) \big\} \qquad (4)$$

is the latest time in $\{j, \ldots, k\}$ such that label $\ell$ still exists.

The multi-object state sequence can thus be equivalently represented by the set of all such trajectories, i.e.

$$\mathbf{X}_{j:k} \equiv \big\{ \tau^{(\ell)} : \ell \in \textstyle\bigcup_{i=j}^{k} \mathcal{L}(\mathbf{X}_i) \big\}. \qquad (5)$$

The left and right hand sides of (5) are simply different groupings of the labeled states on the interval . The multi-object state sequence groups the labeled states according to time while the set of trajectories groups according to labels (see also figure 1 of [30]).
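The regrouping in (2)-(5) can be sketched in Python as follows, assuming each labeled multi-object state is given as a set of (label, kinematic state) pairs with distinct labels, and that each label occupies consecutive times as required of a trajectory:

```python
def trajectories(states, j):
    """Regroup a multi-object state sequence over times j..k into trajectories.

    states[i] is the labeled multi-object state at time j + i, a set of
    (label, x) pairs with distinct labels. Returns a dict mapping each label
    to (start_time, [x_s, ..., x_t]), i.e. the data of (2)-(4).
    """
    trajs = {}
    for offset, X in enumerate(states):
        for label, x in X:
            # First appearance of the label fixes its start time s(label).
            if label not in trajs:
                trajs[label] = (j + offset, [])
            trajs[label][1].append(x)  # kinematic states at consecutive times
    return trajs
```

The time-indexed sequence and the returned label-indexed dictionary hold exactly the same labeled states, mirroring the two groupings on either side of (5).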

For the rest of the article, single-object states are represented by lowercase letters (e.g. $x$, $\mathbf{x}$), while multi-object states are represented by uppercase letters (e.g. $X$, $\mathbf{X}$); symbols for labeled states and their distributions are bolded to distinguish them from unlabeled ones (e.g. $\mathbf{x}$, $\mathbf{X}$, $\boldsymbol{\pi}$, etc.).

### II-B Bayes Recursion

Following the Bayesian paradigm, each labeled multi-object state is modeled as a labeled random finite set (RFS) [28], characterized by the Finite Set Statistics (FISST) multi-object density [14], [27].

Given the observation history $Z_{0:k}$, all information on
the set of objects (and their trajectories) is captured in the
*multi-object posterior density*, $\boldsymbol{\pi}_{0:k}(\mathbf{X}_{0:k}) \triangleq \boldsymbol{\pi}_{0:k}(\mathbf{X}_{0:k} \,|\, Z_{0:k})$. Note that the
dependence on $Z_{0:k}$ is omitted for notational compactness. Similar to
standard Bayesian state estimation [6], [10], the
(multi-object) posterior density can be propagated forward recursively by

$$\boldsymbol{\pi}_{0:k}(\mathbf{X}_{0:k}) = \frac{ g_k(Z_k | \mathbf{X}_k)\, \mathbf{f}_{k|k-1}(\mathbf{X}_k | \mathbf{X}_{k-1})\, \boldsymbol{\pi}_{0:k-1}(\mathbf{X}_{0:k-1}) }{ \boldsymbol{\pi}_k(Z_k | Z_{0:k-1}) }, \qquad (6)$$

where $g_k(Z_k | \mathbf{X}_k)$ is the *multi-object likelihood*
*function* at time $k$, $\mathbf{f}_{k|k-1}(\mathbf{X}_k | \mathbf{X}_{k-1})$ is the
*multi-object transition density* to time $k$, and $\boldsymbol{\pi}_k(Z_k | Z_{0:k-1})$ is the normalising constant, also known as the predictive
likelihood. A valid $\mathbf{f}_{k|k-1}$ ensures each surviving
object keeps the same label and dead labels never reappear [28], so
that the multi-object history $\mathbf{X}_{0:k}$ represents a set of trajectories.

Markov Chain Monte Carlo (MCMC) approximations of the posterior have been proposed in [29] and [8] for detection and image measurements respectively. Combining MCMC with the generic multi-object particle filter [27] has also been suggested in [13].

A cheaper alternative is the *multi-object filtering density*,
$\boldsymbol{\pi}_k(\mathbf{X}_k) \triangleq \boldsymbol{\pi}_k(\mathbf{X}_k \,|\, Z_{0:k})$, which can be propagated by the
*multi-object Bayes filter* [14, 16]

$$\boldsymbol{\pi}_k(\mathbf{X}_k) \propto g_k(Z_k | \mathbf{X}_k) \int \mathbf{f}_{k|k-1}(\mathbf{X}_k | \mathbf{X})\, \boldsymbol{\pi}_{k-1}(\mathbf{X})\, \delta\mathbf{X}. \qquad (7)$$

Under the standard multi-object system model, the filtering recursion
(7) admits an analytic solution known as the
*Generalized Labeled Multi-Bernoulli* (GLMB) filter [28],
[30]. For a general system model, the generic multi-object
particle filter can be applied, see for example [19].
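For intuition, the prediction-update structure of the filtering recursion (7) can be illustrated for a single object on a finite state space; this is only a toy analogue (the multi-object version replaces the sums below with set integrals over labeled RFSs):

```python
def bayes_filter_step(prior, transition, likelihood):
    """One step of the Bayes filter on a finite single-object state space.

    prior:      {x: p(x)}            -- filtering density at the previous time
    transition: {x_prev: {x: f(x|x_prev)}}
    likelihood: {x: g(z|x)}          -- for the current measurement z
    """
    # Prediction: Chapman-Kolmogorov sum over the previous state.
    predicted = {}
    for x_prev, p in prior.items():
        for x, f in transition[x_prev].items():
            predicted[x] = predicted.get(x, 0.0) + f * p
    # Update: multiply by the likelihood and normalize.
    unnorm = {x: likelihood[x] * p for x, p in predicted.items()}
    z = sum(unnorm.values())  # predictive likelihood (normalizing constant)
    return {x: p / z for x, p in unnorm.items()}
```

The posterior recursion (6) has the same two ingredients, but retains the whole state history instead of marginalizing it out at each step.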

### II-C Multi-object System Model

Given a multi-object state $\mathbf{X}$ (at time $k$), each state $(x, \ell) \in \mathbf{X}$ either survives with probability $P_S(x, \ell)$ and evolves to a new state $(x_+, \ell)$ with probability density $f(x_+ | x, \ell)$, or dies with probability $1 - P_S(x, \ell)$. Further, for each $\ell$ in a (finite) birth label space $\mathbb{L}_{k+1}$ at time $k+1$, either a new object with state $(x_+, \ell)$ is born with probability $P_B(\ell)$ and density $p_B(x_+, \ell)$, or unborn with probability $1 - P_B(\ell)$. The multi-object state $\mathbf{X}_+$ (at time $k+1$) is the superposition of surviving states and new born states, and the multi-object transition density is given by equation (6) in [30]. An alternative form (using the multi-scan exponential notation introduced in the next section) is given in subsection III-A.

Given a multi-object state $\mathbf{X}$, each $(x, \ell) \in \mathbf{X}$ is either detected with probability $P_D(x, \ell)$ and generates a detection $z$ with likelihood $g(z | x, \ell)$, or
missed with probability $1 - P_D(x, \ell)$.
The *multi-object observation* $Z$ is the superposition of the
observations from detected objects and Poisson clutter with intensity
$\kappa$. Assuming that, conditional on $\mathbf{X}$, detections are
independent of each other and clutter, the multi-object likelihood function is
given by [28], [30]

where $\Theta$ denotes the set of *positive 1-1* maps $\theta: \mathbb{L}_{0:k} \to \{0{:}|Z|\} \triangleq \{0, 1, \ldots, |Z|\}$ (i.e. those that
never *assign distinct arguments to the same positive value*), and $\Theta(I)$ denotes the subset
of $\Theta$ with domain $I$. The map $\theta$ assigns a detected label $\ell$
to measurement $z_{\theta(\ell)} \in Z$, while $\theta(\ell) = 0$ for an undetected
label $\ell$.

### II-D GLMB Filtering Recursion

Given a state space $\mathbb{X}$ and a discrete space $\mathbb{L}$, a
*generalized labeled multi-Bernoulli* (GLMB) density on $\mathcal{F}(\mathbb{X} \times \mathbb{L})$ has the form [28]:

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{c \in \mathbb{C}} w^{(c)}(\mathcal{L}(\mathbf{X}))\, \big[p^{(c)}\big]^{\mathbf{X}}, \qquad (8)$$

where $\mathbb{C}$ is a discrete index set, each $p^{(c)}(\cdot, \ell)$ is a probability density on $\mathbb{X}$, i.e., $\int p^{(c)}(x, \ell)\, dx = 1$, and each $w^{(c)}(I)$ is non-negative with $\sum_{c \in \mathbb{C}} \sum_{I \subseteq \mathbb{L}} w^{(c)}(I) = 1$. The GLMB density (8) can be interpreted as a mixture of (labeled) multi-object exponentials.

The GLMB family is closed under the Bayes multi-object filtering recursion
(7) and an explicit expression relating the filtering
density at time $k+1$ to that at time $k$ is given by (14) of [31].
This recursion can be expressed in the following form¹, which complies with, and facilitates,
the generalisation to the posterior recursion:

¹This involves a straightforward change of notation, but for completeness, details are given in Appendix VII-B.

Given the GLMB filtering density

(9)

at time $k$, the GLMB filtering density at time $k+1$ is given by

(10)

where , , , denotes the domain of ,

(11)

(12)

(13)

(14)

(15)

(16)

(17)

(18)

The number of components of the GLMB filtering density grows super-exponentially in time. Truncation by discarding components with small weights minimizes the approximation error in the multi-object density [30]. This can be achieved by solving ranked assignment problems using Murty's algorithm or Gibbs sampling [31].
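Truncation by weight can be sketched as follows. This is only a minimal illustration over an explicit component list; in the actual filter the significant components are found without exhaustive enumeration, via ranked assignment or Gibbs sampling:

```python
def truncate(components, max_components):
    """Keep the highest-weight components of a GLMB and renormalize.

    components: list of (weight, hypothesis) pairs; hypothesis is opaque here.
    """
    kept = sorted(components, key=lambda c: c[0], reverse=True)[:max_components]
    total = sum(w for w, _ in kept)
    # Renormalize so the truncated density remains a probability distribution.
    return [(w / total, hyp) for w, hyp in kept]
```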

## III GLMB Posterior Recursion

In this section we extend the GLMB model to the multi-scan case, and subsequently derive an analytic recursion for the multi-scan GLMB posterior.

### III-A Multi-scan GLMB

Recall the equivalence between the multi-object state sequence and the set of trajectories in (5). For any function
taking the trajectories to the non-negative reals we introduce the following
so-called *multi-scan exponential* notation:

(19)

Note from (2) that a trajectory is completely characterised by and the kinematic states , hence we write and interchangeably.

The multi-scan exponential notation is quite suggestive since , and if the labels of , are disjoint then (see Appendix VII-A for additional properties). It also provides an intuitive expression for the multi-object transition density in [28].

###### Proposition 1

###### Definition 2

A *multi-scan GLMB* density on is defined by

(22)

where: is a discrete index set; ; ; each , , is non-negative with

(23)

and each , , is a probability density on , i.e.,

(24)

It is clear that the multi-scan GLMB (density) reduces to a GLMB (density) when the interval consists of a single time.

Similar to the GLMB, the multi-scan GLMB (22) can be expressed in the so-called $\delta$-form:

(25)

where , . Each term or component of a multi-scan GLMB consists of a weight and a multi-scan exponential with label history that matches . The weight can be interpreted as the probability of hypothesis , and for each , is the joint probability density of its kinematic states, given hypothesis .

###### Proposition 3

By setting to 1 in the above proposition, the multi-scan GLMB integrates to 1, and hence, is a FISST density. Some useful statistics of the multi-scan GLMB follow from the above proposition for suitably defined functions of the labels.

###### Corollary 4

The cardinality distribution, i.e. the distribution of the number of trajectories, is given by

(27)

###### Corollary 5

The joint probability of existence of a set of trajectories with labels is given by

(28)

As a special case, the probability of existence of trajectory with label is

(29)

###### Corollary 6

The distribution of trajectory lengths is given by

(30)

and the distribution of the length of trajectory with label is

(31)
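Assuming the weights are stored per hypothesized label set (as in the $\delta$-form (25)), the statistics in Corollaries 4 and 5 amount to simple sums over the weights; a sketch:

```python
def cardinality_distribution(weights):
    """weights: {frozenset_of_labels: w}; returns {n: Pr(n trajectories)},
    i.e. the cardinality distribution of (27)."""
    dist = {}
    for I, w in weights.items():
        dist[len(I)] = dist.get(len(I), 0.0) + w
    return dist

def existence_probability(weights, labels):
    """Joint probability that all the given labels exist, as in (28):
    sum of the weights of hypotheses containing every requested label."""
    labels = set(labels)
    return sum(w for I, w in weights.items() if labels <= I)
```

A single label passed to `existence_probability` recovers the individual existence probability of (29).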

Similar to its single-scan counterpart, a number of estimators can be constructed for a multi-scan GLMB. The simplest would be to find the multi-scan GLMB component with the highest weight and compute the most probable or expected trajectory estimate from its trajectory density for each label. Alternatively, instead of the most significant component overall, we can use the most significant amongst components with the most probable cardinality (determined by maximizing the cardinality distribution (27)).

Another class of estimators, based on existence probabilities, can be constructed as follows. Find the set of labels with highest joint existence probability by maximizing (28). Then for each determine the most probable length by maximizing (31) and compute the trajectory density

(32)

from which the most probable or expected trajectory estimate can be determined. Alternatively, instead of the label set with highest joint existence probability, we can use the label set of cardinality with highest joint existence probability. Another option is to find the labels with highest individual existence probabilities and use the same strategy for computing the trajectory estimates.
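The component-based estimators described above can be sketched as follows, with each component reduced to its weight and hypothesized label set (extracting the actual trajectory estimates from the trajectory densities is omitted):

```python
def map_estimate(components):
    """components: list of (weight, label_set) pairs.
    Returns the label set of the highest-weight component."""
    return max(components, key=lambda c: c[0])[1]

def map_cardinality_estimate(components):
    """First pick the most probable cardinality, then the highest-weight
    component among those with that cardinality."""
    card = {}
    for w, I in components:
        card[len(I)] = card.get(len(I), 0.0) + w
    n_star = max(card, key=card.get)  # most probable number of trajectories
    return max((c for c in components if len(c[1]) == n_star),
               key=lambda c: c[0])[1]
```

The two estimators can disagree: a single component may dominate all others while a different cardinality carries more total weight.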

### III-B Multi-scan GLMB Posterior Recursion

Just as the GLMB is closed under the filtering recursion (7), the multi-scan GLMB is closed under the posterior recursion (6). Moreover, the multi-scan GLMB posterior recursion is, in essence, the GLMB filtering recursion without the marginalization of past labels and kinematic states. This is stated more concisely in the following proposition (see Appendix VII-E for the proof).

###### Proposition 7

Under the standard multi-object system model, if the multi-object posterior at time is a multi-scan GLMB of the form

(33)

where , then the multi-object posterior at time is the multi-scan GLMB:

(34)

where ,

(35)

(36)

(37)

Note that is indeed a probability density since

The multi-scan GLMB posterior recursion (33)-(34) bears a remarkable resemblance to the GLMB filtering recursion (9)-(10). Indeed, the weight increments for multi-scan GLMB and GLMB components are identical. Arguably, the multi-scan GLMB recursion is more intuitive because it involves neither marginalization over previous label sets nor over past states of the trajectories.

The multi-scan GLMB recursion initiates trajectories for new labels, updates trajectories for surviving labels, terminates trajectories for disappearing labels, and stores trajectories that disappeared earlier. Noting that is equivalent to , initiation of trajectories for new labels is identical to that of the GLMB filter. Noting that is equivalent to , the update of trajectories for surviving labels is the same as in the GLMB filter, but without marginalization of past kinematic states. On the other hand, termination/storing of trajectories for disappearing/disappeared labels is not needed in the GLMB filter.

### III-C Canonical Multi-scan GLMB Posterior

Without summing over the labels nor integrating the probability densities of the trajectories, the canonical expression for the multi-scan GLMB posterior takes on a rather compact form. To accomplish this, we represent each by an extended association map : defined by

(38)

Let denote the set of positive 1-1 maps from to :, and (with a slight abuse of notation) denote the live labels of , i.e. the domain , by

Then for any , we can recover by for each . It is clear that there is a bijection between and , and hence can be completely represented by .

Starting with an empty initial posterior , by iteratively applying Proposition 7, the posterior at time is given by

(39)

where ,

(40)
