# A Multi-Scan Labeled Random Finite Set Model for Multi-object State Estimation

State space models in which the system state is a finite set, called the multi-object state, have generated considerable interest in recent years. Smoothing for state space models provides better estimation performance than filtering by using the full posterior rather than the filtering density. In multi-object state estimation, the Bayes multi-object filtering recursion admits an analytic solution known as the Generalized Labeled Multi-Bernoulli (GLMB) filter. In this work, we extend the analytic GLMB recursion to propagate the multi-object posterior. We also propose an implementation of this so-called multi-scan GLMB posterior recursion using a similar approach to the GLMB filter implementation.


## I Introduction

In Bayesian estimation for state-space models, smoothing yields significantly better estimates than filtering by using the history of the states rather than the most recent state [18], [6], [10]. Conditional on the observation history, filtering only considers the current state via the filtering density, whereas smoothing considers the sequence of states up to the current time via the posterior density. Numerical methods for computing the filtering and posterior densities have a long history and are still an active area of research, see for example [1], [25], [9], [10]. Recursive computation of the posterior density is also known as smoothing-while-filtering [6].

A generalisation of state-space models that has attracted substantial interest in recent years is Mahler's Finite Set Statistics (FISST) framework for multi-object systems [14], [15], [16, 17]. Instead of a vector, the state of a multi-object system at each time, called the multi-object state, is a finite set of vectors. Since its inception, a host of algorithms have been developed for multi-object state estimation [16, 17]. By incorporating labels (or identities), multi-object state estimation provides a state-space formulation of the multi-object tracking problem, where the aim is to estimate the number of objects and their trajectories [5, 2, 16]. Numerically, this problem is far more complex than standard state estimation due to additional challenges such as false measurements, misdetections and data association uncertainty.

In multi-object state estimation, the labeled multi-object filtering recursion admits an analytic solution known as the Generalized Labeled Multi-Bernoulli (GLMB) filter [28], [30]. Moreover, this recursion can be implemented with linear complexity in the number of measurements and quadratic in the number of hypothesized objects [31]. Since the filtering density only considers information on the current multi-object state, earlier estimates cannot be updated with current data. Consequently, apart from poorer performance compared to smoothing, an important drawback in a multi-object context is track fragmentation, where terminated trajectories are picked up again as new evidence from the data emerges.

In this paper, we extend the GLMB filtering recursion to a (labeled) multi-object posterior recursion. Such a posterior captures all information on the set of underlying trajectories, eliminates track fragmentation, and improves general tracking performance. Specifically, by introducing the multi-scan GLMB model, an analytic multi-object posterior recursion is derived. Interestingly, the multi-scan GLMB recursion takes on an even simpler and more intuitive form than the GLMB recursion. In implementation, however, the multi-scan GLMB recursion is far more challenging. Like the GLMB filter, the multi-scan GLMB posterior needs to be truncated, and as shown in this article, truncation by retaining the components with the highest weights minimizes the truncation error. Unlike the GLMB filter, finding the significant components of a multi-scan GLMB is an NP-hard multi-dimensional assignment problem. To solve this problem, we propose an extension of the Gibbs sampler for the 2-D assignment problem in [31] to higher dimensions. The resulting technique can be applied to compute the GLMB posterior off-line in one batch, or recursively as new observations arrive, thereby performing smoothing-while-filtering.

The remainder of this article is divided into five sections. Section II summarizes relevant concepts in Bayesian multi-object state estimation and the GLMB filter. Section III introduces the multi-scan GLMB model and the multi-scan GLMB posterior recursion. Section IV presents an implementation of the multi-scan GLMB recursion using Gibbs sampling. Numerical studies are presented in Section V and conclusions are given in Section VI.

## II Background

Following the convention in [28], the list of variables $X_m, X_{m+1}, \ldots, X_n$ is abbreviated as $X_{m:n}$, and the inner product $\int f(x)g(x)\,dx$ is denoted by $\langle f,g\rangle$. For a given set $S$, $1_S(\cdot)$ denotes the indicator function of $S$, and $\mathcal{F}(S)$ denotes the class of finite subsets of $S$. For a finite set $X$, its cardinality (or number of elements) is denoted by $|X|$, and the product $\prod_{x\in X}h(x)$, for some function $h$, is denoted by the multi-object exponential $[h]^X$, with $[h]^{\emptyset} = 1$. In addition we use

$$\delta_Y[X] \triangleq \begin{cases} 1, & \text{if } X = Y \\ 0, & \text{otherwise} \end{cases}$$

for a generalization of the Kronecker delta that takes arbitrary arguments.

### II-A Trajectories and Multi-object States

This subsection summarizes the representation of trajectories via labeled multi-object states.

At time $k$, an existing object is described by a vector $x \in \mathcal{X}$ and a unique label $\ell = (s, \alpha)$, where $s$ is the time of birth, and $\alpha$ is a unique index to distinguish objects born at the same time (see Fig. 1 in [30]). Let $B_k$ denote the label space for objects born at time $k$; then the label space $\mathbb{L}_k$ for all objects up to time $k$ (including those born prior to $k$) is given by the disjoint union $\mathbb{L}_k = B_0 \uplus \cdots \uplus B_k$ (note that $\mathbb{L}_k = \mathbb{L}_{k-1} \uplus B_k$). Hence, a labeled state $(x, \ell)$ at time $k$ is an element of $\mathcal{X} \times \mathbb{L}_k$.

A trajectory is a sequence of labeled states with a common label, at consecutive times [28], i.e. a trajectory with label $\ell$ and kinematic states $x_s, \ldots, x_t$ is the sequence

$$\tau = [(x_s,\ell),(x_{s+1},\ell),\ldots,(x_t,\ell)]. \tag{1}$$

A labeled multi-object state $X$ at time $k$ is a finite subset of $\mathcal{X} \times \mathbb{L}_k$ with distinct labels. More concisely, let $\mathcal{L} : \mathcal{X} \times \mathbb{L}_k \to \mathbb{L}_k$ be the projection defined by $\mathcal{L}(x,\ell) = \ell$; then $X$ has distinct labels if and only if the distinct label indicator $\Delta(X) \triangleq \delta_{|X|}[|\mathcal{L}(X)|]$ equals one. The labeled states, at time $k$, of a set of trajectories (with distinct labels) form the labeled multi-object state $X_k = \{x_k^{(\ell)} : \ell \in \mathcal{L}(X_k)\}$, where $x_k^{(\ell)}$ denotes the labeled state of the trajectory with label $\ell$ at time $k$.

Consider a sequence of labeled multi-object states $X_{j:k}$ on the interval $\{j, \ldots, k\}$. Let $x_i^{(\ell)}$ denote the element of $X_i$ with label $\ell$. Then the trajectory in $X_{j:k}$ with label $\ell$ is the sequence of states with label $\ell$:

$$x^{(\ell)}_{s(\ell):t(\ell)} = \left[(x^{(\ell)}_{s(\ell)},\ell), \ldots, (x^{(\ell)}_{t(\ell)},\ell)\right], \tag{2}$$

where

$$s(\ell) = \max\!\left(j,\ \ell\,[1,0]^{T}\right) \tag{3}$$

is the start time of label $\ell$ in the interval $\{j, \ldots, k\}$ (the first component of $\ell$ being its birth time), and

$$t(\ell) = s(\ell) + \sum_{i=s(\ell)+1}^{k} 1_{\mathcal{L}(X_i)}(\ell) \tag{4}$$

is the latest time in $\{j, \ldots, k\}$ such that label $\ell$ still exists.

The multi-object state sequence can thus be equivalently represented by the set of all such trajectories, i.e.

$$X_{j:k} \equiv \left\{ x^{(\ell)}_{s(\ell):t(\ell)} : \ell \in \textstyle\bigcup_{i=j}^{k} \mathcal{L}(X_i) \right\}. \tag{5}$$

The left and right hand sides of (5) are simply different groupings of the labeled states on the interval $\{j, \ldots, k\}$. The multi-object state sequence groups the labeled states according to time, while the set of trajectories groups them according to labels (see also Fig. 1 of [30]).
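The time-grouping versus label-grouping equivalence in (5) is easy to see in code. Below is a minimal Python sketch (all names are illustrative, not from the paper's implementation) that regroups a sequence of labeled multi-object states into trajectories, recovering each label's start time and state sequence; it assumes, as the model guarantees, that each label exists over consecutive times:

```python
def extract_trajectories(X_seq, j):
    """X_seq[i] is the labeled multi-object state at time j+i, given as a
    dict {label: kinematic_state}; labels are (birth_time, index) tuples."""
    trajectories = {}
    for offset, X_i in enumerate(X_seq):
        t = j + offset
        for label, x in X_i.items():
            # first appearance of a label fixes its start time s(ell)
            if label not in trajectories:
                trajectories[label] = (t, [])
            trajectories[label][1].append(x)
    # {label: (s(ell), [x_{s(ell)}, ..., x_{t(ell)}])}
    return trajectories

trajs = extract_trajectories(
    [{(0, 1): 1.0}, {(0, 1): 1.5, (1, 1): 2.0}, {(1, 1): 2.5}], j=0)
# trajs[(0, 1)] == (0, [1.0, 1.5]);  trajs[(1, 1)] == (1, [2.0, 2.5])
```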

For the rest of the article, single-object states are represented by lowercase letters (e.g. $x$), while multi-object states are represented by uppercase letters (e.g. $X$); symbols for labeled states and their distributions are bolded in [28], [30] to distinguish them from unlabeled ones.

### II-B Bayes Recursion

Following the Bayesian paradigm, each labeled multi-object state is modeled as a labeled random finite set (RFS) [28], characterized by its Finite Set Statistics (FISST) multi-object density [14], [27].

Given the observation history $Z_{1:k}$, all information on the set of objects (and their trajectories) is captured in the multi-object posterior density $\pi_{0:k}(\cdot\,|Z_{1:k})$. Note that the dependence on $Z_{1:k}$ is omitted for notational compactness. Similar to standard Bayesian state estimation [6], [10], the (multi-object) posterior density can be propagated forward recursively by

$$\pi_{0:k}(X_{0:k}) = \frac{g_k(Z_k|X_k)\, f_{k|k-1}(X_k|X_{k-1})\, \pi_{0:k-1}(X_{0:k-1})}{h_k(Z_k|Z_{1:k-1})}, \tag{6}$$

where $g_k(Z_k|X_k)$ is the multi-object likelihood function at time $k$, $f_{k|k-1}(X_k|X_{k-1})$ is the multi-object transition density to time $k$, and $h_k(Z_k|Z_{1:k-1})$ is the normalising constant, also known as the predictive likelihood. A valid transition density ensures each surviving object keeps the same label and dead labels never reappear [28], so that the multi-object history $X_{0:k}$ represents a set of trajectories.

Markov Chain Monte Carlo (MCMC) approximations of the posterior have been proposed in [29] and [8] for detection and image measurements respectively. Combining MCMC with the generic multi-object particle filter [27] has also been suggested in [13].

A cheaper alternative is the multi-object filtering density $\pi_k \triangleq \pi_{k:k}$, which can be propagated by the multi-object Bayes filter [14, 16]

$$\pi_k(X_k) = \frac{g_k(Z_k|X_k)\int f_{k|k-1}(X_k|X_{k-1})\,\pi_{k-1}(X_{k-1})\,\delta X_{k-1}}{h_k(Z_k|Z_{1:k-1})}. \tag{7}$$

Under the standard multi-object system model, the filtering recursion (7) admits an analytic solution known as the Generalized Labeled Multi-Bernoulli (GLMB) filter [28], [30]. For a general system model, the generic multi-object particle filter can be applied, see for example [19].

### II-C Multi-object System Model

Given a multi-object state $X_{k-1}$ (at time $k-1$), each state $(x_{k-1},\ell) \in X_{k-1}$ either survives with probability $P_{S,k-1}(x_{k-1},\ell)$ and evolves to a new state $(x_k,\ell)$ with probability density $f_{S,k|k-1}(x_k|x_{k-1},\ell)$, or dies with probability $Q_{S,k-1}(x_{k-1},\ell) = 1 - P_{S,k-1}(x_{k-1},\ell)$. Further, for each $\ell$ in a (finite) birth label space $B_k$ at time $k$, either a new object with state $(x_k,\ell)$ is born with probability $P_{B,k}(\ell)$ and density $f_{B,k}(x_k,\ell)$, or is unborn with probability $Q_{B,k}(\ell) = 1 - P_{B,k}(\ell)$. The multi-object state $X_k$ (at time $k$) is the superposition of surviving states and new born states, and the multi-object transition density is given by equation (6) in [30]. An alternative form (using the multi-scan exponential notation introduced in the next section) is given in subsection III-A.

Given a multi-object state $X_k$, each $(x,\ell) \in X_k$ is either detected with probability $P_{D,k}(x,\ell)$ and generates a detection $z \in Z_k$ with likelihood $g_{D,k}(z|x,\ell)$, or missed with probability $Q_{D,k}(x,\ell) = 1 - P_{D,k}(x,\ell)$. The multi-object observation $Z_k = \{z_1, \ldots, z_m\}$ is the superposition of the observations from detected objects and Poisson clutter with intensity $\kappa_k$. Assuming that, conditional on $X_k$, detections are independent of each other and of clutter, the multi-object likelihood function is given by [28], [30]

$$g_k(Z_k|X_k) \propto \sum_{\theta_k\in\Theta_k} 1_{\Theta_k(\mathcal{L}(X_k))}(\theta_k) \left[\psi^{(\theta_k\circ\mathcal{L}(\cdot))}_{k,Z_k}(\cdot)\right]^{X_k}$$

where

$$\psi^{(j)}_{k,\{z_1,\ldots,z_m\}}(x,\ell) = \begin{cases} \dfrac{P_{D,k}(x,\ell)\, g_{D,k}(z_j|x,\ell)}{\kappa_k(z_j)}, & \text{if } j > 0 \\[1.5ex] Q_{D,k}(x,\ell), & \text{if } j = 0, \end{cases}$$

where $\Theta_k$ denotes the set of positive 1-1 maps (i.e. those that never assign distinct arguments to the same positive value) from $\mathbb{L}_k$ to $\{0, 1, \ldots, |Z_k|\}$, and $\Theta_k(I)$ denotes the subset of $\Theta_k$ with domain $I$. The map $\theta_k$ assigns a detected label $\ell$ to measurement $z_{\theta_k(\ell)} \in Z_k$, while $\theta_k(\ell) = 0$ for an undetected label $\ell$.
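The combinatorial object here is the set of positive 1-1 association maps. The following Python sketch (illustrative only; brute-force enumeration, not what any practical filter does) lists all such maps from a label set to $\{0,1,\ldots,m\}$, where 0 (misdetection) may repeat but positive measurement indices must be distinct:

```python
from itertools import product

def association_maps(labels, m):
    """All maps label -> {0,...,m} whose positive values are distinct."""
    maps = []
    for assignment in product(range(m + 1), repeat=len(labels)):
        positives = [a for a in assignment if a > 0]
        if len(positives) == len(set(positives)):  # positive values 1-1
            maps.append(dict(zip(labels, assignment)))
    return maps

# Two labels, two measurements: 3**2 = 9 raw assignments, minus the two
# that reuse a positive index, leaving 7 valid association maps.
```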

### II-D GLMB Filtering Recursion

Given a state space $\mathcal{X}$ and a discrete label space $\mathbb{L}$, a generalized labeled multi-Bernoulli (GLMB) density on $\mathcal{F}(\mathcal{X}\times\mathbb{L})$ has the form [28]:

$$\pi(X) = \Delta(X) \sum_{\xi\in\Xi} w^{(\xi)}(\mathcal{L}(X)) \left[p^{(\xi)}\right]^{X}, \tag{8}$$

where $\Xi$ is a discrete index set, each $p^{(\xi)}(\cdot,\ell)$ is a probability density on $\mathcal{X}$, i.e., $\int p^{(\xi)}(x,\ell)\,dx = 1$, and each $w^{(\xi)}(I)$ is non-negative with $\sum_{\xi}\sum_{I} w^{(\xi)}(I) = 1$. The GLMB density (8) can be interpreted as a mixture of (labeled) multi-object exponentials.
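The mixture structure of (8) can be illustrated with a toy component list. In the sketch below (weights and label sets are invented for illustration; the per-label densities are omitted), each component carries a weight over a label set; the weights sum to one, and summing weights by label-set size gives the cardinality distribution:

```python
from collections import defaultdict

# Toy GLMB weight table: (weight, label set). Labels are (birth, index) tuples.
components = [
    (0.5, frozenset()),                    # hypothesis: no objects
    (0.3, frozenset({(1, 1)})),            # one object
    (0.2, frozenset({(1, 1), (1, 2)})),    # two objects
]

assert abs(sum(w for w, _ in components) - 1.0) < 1e-12  # valid mixture

card_dist = defaultdict(float)
for w, I in components:
    card_dist[len(I)] += w
# card_dist == {0: 0.5, 1: 0.3, 2: 0.2}
```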

The GLMB family is closed under the Bayes multi-object filtering recursion (7), and an explicit expression relating the filtering density at time $k$ to that at time $k-1$ is given by (14) of [31]. This recursion can be expressed in the following form (this involves a straightforward change of notation; for completeness, details are given in Appendix VII-B), which complies with, and facilitates, the generalisation to the posterior recursion:

Given the GLMB filtering density

$$\pi_{k-1}(X_{k-1}) = \Delta(X_{k-1}) \sum_{\xi} w^{(\xi)}_{k-1}(\mathcal{L}(X_{k-1})) \left[p^{(\xi)}_{k-1}\right]^{X_{k-1}}, \tag{9}$$

at time $k-1$, the GLMB filtering density at time $k$ is given by

$$\pi_k(X_k) \propto \Delta(X_k) \sum_{\xi,\theta_k,I_{k-1}} w^{(\xi,\theta_k)}_k(I_{k-1})\, \delta_{D(\theta_k)}[\mathcal{L}(X_k)] \left[p^{(\xi,\theta_k)}_k\right]^{X_k} \tag{10}$$

where $\xi \in \Xi$, $\theta_k \in \Theta_k$, $I_{k-1} \subseteq \mathbb{L}_{k-1}$, $D(\theta_k)$ denotes the domain of $\theta_k$,

$$w^{(\xi,\theta_k)}_k(I_{k-1}) = 1_{\mathcal{F}(B_k\uplus I_{k-1})}(D(\theta_k)) \left[w^{(\xi,\theta_k)}_{k|k-1}\right]^{B_k\uplus I_{k-1}} w^{(\xi)}_{k-1}(I_{k-1}) \tag{11}$$

$$w^{(\xi,\theta_k)}_{k|k-1}(\ell) = \begin{cases} \bar\Lambda^{(\theta_k(\ell))}_{B,k}(\ell), & \ell \in D(\theta_k)\cap B_k, \\ \bar\Lambda^{(\xi,\theta_k(\ell))}_{S,k|k-1}(\ell), & \ell \in D(\theta_k) - B_k, \\ Q_{B,k}(\ell), & \ell \in B_k - D(\theta_k), \\ \bar Q^{(\xi)}_{S,k-1}(\ell), & \text{otherwise}, \end{cases} \tag{12}$$

$$p^{(\xi,\theta_k)}_k(x,\ell) = \begin{cases} \dfrac{\Lambda^{(\theta_k(\ell))}_{B,k}(x,\ell)}{\bar\Lambda^{(\theta_k(\ell))}_{B,k}(\ell)}, & \ell \in D(\theta_k)\cap B_k, \\[2ex] \dfrac{\left\langle \Lambda^{(\theta_k(\ell))}_{S,k|k-1}(x|\cdot,\ell),\ p^{(\xi)}_{k-1}(\cdot,\ell)\right\rangle}{\bar\Lambda^{(\xi,\theta_k(\ell))}_{S,k|k-1}(\ell)}, & \ell \in D(\theta_k) - B_k, \end{cases} \tag{13}$$

$$\Lambda^{(j)}_{B,k}(x,\ell) = \psi^{(j)}_{k,Z_k}(x,\ell)\, f_{B,k}(x,\ell)\, P_{B,k}(\ell), \tag{14}$$

$$\Lambda^{(j)}_{S,k|k-1}(x|\varsigma,\ell) = \psi^{(j)}_{k,Z_k}(x,\ell)\, f_{S,k|k-1}(x|\varsigma,\ell)\, P_{S,k-1}(\varsigma,\ell), \tag{15}$$

$$\bar Q^{(\xi)}_{S,k-1}(\ell) = \left\langle Q_{S,k-1}(\cdot,\ell),\ p^{(\xi)}_{k-1}(\cdot,\ell)\right\rangle, \tag{16}$$

$$\bar\Lambda^{(j)}_{B,k}(\ell) = \left\langle \Lambda^{(j)}_{B,k}(\cdot,\ell),\ 1\right\rangle, \tag{17}$$

$$\bar\Lambda^{(\xi,j)}_{S,k|k-1}(\ell) = \int \left\langle \Lambda^{(j)}_{S,k|k-1}(x|\cdot,\ell),\ p^{(\xi)}_{k-1}(\cdot,\ell)\right\rangle dx. \tag{18}$$

The number of components of the GLMB filtering density grows super-exponentially in time. Truncation by discarding components with small weights minimizes the approximation error in the multi-object density [30]. This can be achieved by solving ranked assignment problems using Murty's algorithm or Gibbs sampling [31].
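Truncation itself is simple once the significant components are found. A minimal sketch (with illustrative weights) of keeping the $K$ largest weights; the discarded mass is exactly the L1 truncation error, which is why keeping the highest weights is optimal:

```python
def truncate(weights, K):
    """Keep the K largest weights, renormalize, and report discarded mass."""
    kept = sorted(weights, reverse=True)[:K]
    l1_error = sum(weights) - sum(kept)   # total weight discarded
    norm = sum(kept)
    return [w / norm for w in kept], l1_error

kept, err = truncate([0.4, 0.3, 0.2, 0.1], K=2)
# err ~= 0.3 (the two smallest weights); kept renormalizes to sum to 1
```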

## III GLMB Posterior Recursion

In this section we extend the GLMB model to the multi-scan case, and subsequently derive an analytic recursion for the multi-scan GLMB posterior.

### III-A Multi-scan GLMB

Recall the equivalence between the multi-object state sequence and the set of trajectories in (5). For any function $h$ taking the trajectories to the non-negative reals, we introduce the following so-called multi-scan exponential notation:

$$[h]^{X_{j:k}} \triangleq [h]^{\left\{x^{(\ell)}_{s(\ell):t(\ell)}\,:\ \ell\,\in\,\cup_{i=j}^{k}\mathcal{L}(X_i)\right\}} = \prod_{\ell\in\cup_{i=j}^{k}\mathcal{L}(X_i)} h\!\left(x^{(\ell)}_{s(\ell):t(\ell)}\right) \tag{19}$$

Note from (2) that a trajectory $x^{(\ell)}_{s(\ell):t(\ell)}$ is completely characterised by $\ell$ and the kinematic states $x_{s(\ell)}, \ldots, x_{t(\ell)}$, hence we write $h(x^{(\ell)}_{s(\ell):t(\ell)})$ and $h(x_{s(\ell):t(\ell)};\ell)$ interchangeably.

The multi-scan exponential notation is quite suggestive since $[h]^{X_{k:k}} = [h]^{X_k}$, and if the label sets of $X_{j:i}$ and $X_{i+1:k}$ are disjoint then $[h]^{X_{j:k}} = [h]^{X_{j:i}}[h]^{X_{i+1:k}}$ (see Appendix VII-A for additional properties). It also provides an intuitive expression for the multi-object transition density in [28].
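As a concrete illustration of (19), the following sketch (with invented trajectories) evaluates a multi-scan exponential as a product of one factor per label:

```python
import math

def multi_scan_exponential(h, trajectories):
    """[h]^{X_{j:k}}: product of h over the trajectories in the window.
    `trajectories` maps each label to its kinematic state sequence."""
    return math.prod(h(traj) for traj in trajectories.values())

# With h(traj) = 0.5 ** len(traj), the product depends only on the total
# number of labeled states in the window: here 2 + 1 = 3 states.
trajs = {(0, 1): [1.0, 1.5], (1, 1): [2.0]}
val = multi_scan_exponential(lambda tr: 0.5 ** len(tr), trajs)
# val == 0.5 ** 3 == 0.125; an empty window gives 1, matching [h]^{} = 1
```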

###### Proposition 1

For the multi-object dynamic model described in subsection II-C, the multi-object transition density is given by

$$f_{k|k-1}(X_k|X_{k-1}) = \Delta(X_k)\, 1_{\mathcal{F}(B_k\uplus\mathcal{L}(X_{k-1}))}(\mathcal{L}(X_k)) \left[Q_{B,k}\right]^{B_k-\mathcal{L}(X_k)} \left[\phi_{k-1:k}\right]^{X_{k-1:k}} \tag{20}$$

where

$$\phi_{k-1:k}\!\left(x^{(\ell)}_{s(\ell):t(\ell)};\ell\right) = \begin{cases} P_{B,k}(\ell)\, f_{B,k}(x^{(\ell)}_k,\ell), & s(\ell) = k \\ P_{S,k-1}(x^{(\ell)}_{k-1},\ell)\, f_{S,k|k-1}(x^{(\ell)}_k|x^{(\ell)}_{k-1},\ell), & t(\ell) = k > s(\ell) \\ Q_{S,k-1}(x^{(\ell)}_{k-1},\ell), & t(\ell) = k-1 \end{cases} \tag{21}$$

For completeness the proof is given in Appendix VII-C.

###### Definition 2

A multi-scan GLMB density on $\mathcal{F}(\mathcal{X}\times\mathbb{L}_j)\times\cdots\times\mathcal{F}(\mathcal{X}\times\mathbb{L}_k)$ is defined by

$$\pi(X_{j:k}) = \Delta(X_{j:k}) \sum_{\xi\in\Xi} w^{(\xi)}(\mathcal{L}(X_{j:k})) \left[p^{(\xi)}\right]^{X_{j:k}} \tag{22}$$

where: $\Xi$ is a discrete index set; $\Delta(X_{j:k}) \triangleq \prod_{i=j}^{k}\Delta(X_i)$; $\mathcal{L}(X_{j:k}) \triangleq (\mathcal{L}(X_j), \ldots, \mathcal{L}(X_k))$; each $w^{(\xi)}(I_{j:k})$, $I_{j:k} \in \mathcal{F}(\mathbb{L}_j)\times\cdots\times\mathcal{F}(\mathbb{L}_k)$, is non-negative with

$$\sum_{\xi}\sum_{I_{j:k}} w^{(\xi)}(I_{j:k}) = 1; \tag{23}$$

and each $p^{(\xi)}(\cdot\,;\ell)$ is a probability density, i.e.,

$$\int p^{(\xi)}(x_{s(\ell):t(\ell)};\ell)\, dx_{s(\ell):t(\ell)} = 1. \tag{24}$$

It is clear that the multi-scan GLMB (density) reduces to a GLMB (density) when $j = k$.

Similar to the GLMB, the multi-scan GLMB (22) can be expressed in the so-called $\delta$-form:

$$\pi(X_{j:k}) = \Delta(X_{j:k}) \sum_{\xi}\sum_{I_{j:k}} w^{(\xi)}(I_{j:k})\, \delta_{I_{j:k}}[\mathcal{L}(X_{j:k})] \left[p^{(\xi)}\right]^{X_{j:k}} \tag{25}$$

where $I_{j:k} \in \mathcal{F}(\mathbb{L}_j)\times\cdots\times\mathcal{F}(\mathbb{L}_k)$ and $\delta_{I_{j:k}}[\mathcal{L}(X_{j:k})] \triangleq \prod_{i=j}^{k}\delta_{I_i}[\mathcal{L}(X_i)]$. Each term or component of a multi-scan GLMB consists of a weight $w^{(\xi)}(I_{j:k})$ and a multi-scan exponential $[p^{(\xi)}]^{X_{j:k}}$ with label history that matches $I_{j:k}$. The weight can be interpreted as the probability of the hypothesis $(\xi, I_{j:k})$, and for each trajectory with label $\ell$ in this hypothesis, $p^{(\xi)}(\cdot\,;\ell)$ is the joint probability density of its kinematic states, given the hypothesis.

###### Proposition 3

The integral of a function with respect to the multi-scan GLMB (22) is

 ∫f(L(Xj:k))π(Xj:k)δXj:k=∑ξ∑Ij:kf(Ij:k)w(ξ)(Ij:k) (26)

where , . See Appendix VII-D for proof.

By setting $f$ to 1 in the above proposition, the multi-scan GLMB integrates to 1, and hence is a FISST density. Some useful statistics of the multi-scan GLMB follow from the above proposition for suitably defined functions of the labels.

###### Corollary 4

The cardinality distribution, i.e. the distribution of the number of trajectories, is given by

$$\Pr\left(\left|\cup_{i=j}^{k}\mathcal{L}(X_i)\right| = n\right) = \sum_{\xi}\sum_{I_{j:k}} \delta_n\!\left[\left|\cup_{i=j}^{k} I_i\right|\right] w^{(\xi)}(I_{j:k}) \tag{27}$$
###### Corollary 5

The joint probability of existence of a set of trajectories with labels $L$ is given by

$$\Pr(L \text{ exist}) = \sum_{\xi}\sum_{I_{j:k}} 1_{\mathcal{F}(\cup_{i=j}^{k} I_i)}(L)\, w^{(\xi)}(I_{j:k}). \tag{28}$$

As a special case, the probability of existence of the trajectory with label $\ell$ is

$$\Pr(\ell \text{ exists}) = \sum_{\xi}\sum_{I_{j:k}} 1_{\cup_{i=j}^{k} I_i}(\ell)\, w^{(\xi)}(I_{j:k}). \tag{29}$$
###### Corollary 6

The distribution of trajectory lengths is given by

$$\Pr(\text{a trajectory has length } m) = \sum_{\xi}\sum_{I_{j:k}} \frac{w^{(\xi)}(I_{j:k})}{\left|\cup_{i=j}^{k} I_i\right|} \sum_{\ell\in\cup_{i=j}^{k} I_i} \delta_m[t(\ell)-s(\ell)+1], \tag{30}$$

and the distribution of the length of the trajectory with label $\ell$ is

$$\Pr(\mathrm{length}(\ell) = m) = \sum_{\xi}\sum_{I_{j:k}} \delta_m[t(\ell)-s(\ell)+1]\, 1_{\cup_{i=j}^{k} I_i}(\ell)\, w^{(\xi)}(I_{j:k}). \tag{31}$$

Similar to its single-scan counterpart, a number of estimators can be constructed for a multi-scan GLMB. The simplest would be to find the multi-scan GLMB component with the highest weight and compute the most probable or expected trajectory estimate from $p^{(\xi)}(\cdot\,;\ell)$ for each label $\ell$ in that component. Alternatively, instead of the most significant component overall, we can use the most significant amongst components with the most probable cardinality (determined by maximizing the cardinality distribution (27)).

Another class of estimators, based on existence probabilities, can be constructed as follows. Find the set of labels $L^*$ with the highest joint existence probability by maximizing (28). Then for each $\ell \in L^*$ determine the most probable length $m^*$ by maximizing (31) and compute the trajectory density

$$p(x_{s(\ell):s(\ell)+m^*-1};\ell) \propto \sum_{\xi}\sum_{I_{j:k}} \delta_{m^*}[t(\ell)-s(\ell)+1]\, 1_{\cup_{i=j}^{k} I_i}(\ell)\, w^{(\xi)}(I_{j:k})\, p^{(\xi)}(x_{s(\ell):s(\ell)+m^*-1};\ell), \tag{32}$$

from which the most probable or expected trajectory estimate can be determined. Alternatively, instead of the label set with the highest joint existence probability, we can use the label set of a given cardinality with the highest joint existence probability. Another option is to find the labels with the highest individual existence probabilities and use the same strategy for computing the trajectory estimates.
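The weight-based and existence-based estimators above reduce to simple operations on the component list. A toy sketch (components invented for illustration; per-label densities omitted), showing the maximum-weight component and the existence probability (29); each component is a weight plus a tuple of per-scan label sets:

```python
def existence_prob(components, label):
    """Eq. (29): total weight of hypotheses whose history contains `label`."""
    return sum(w for w, I_seq in components
               if any(label in I for I in I_seq))

components = [
    (0.6, (frozenset({'a'}), frozenset({'a'}))),  # 'a' alive at both scans
    (0.3, (frozenset({'a'}), frozenset())),       # 'a' dies after scan one
    (0.1, (frozenset(), frozenset())),            # empty hypothesis
]

best_w, best_labels = max(components, key=lambda c: c[0])  # MAP component
# best_w == 0.6; Pr('a' exists) == 0.6 + 0.3 == 0.9
```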

### III-B Multi-scan GLMB Posterior Recursion

Just as the GLMB is closed under the filtering recursion (7), the multi-scan GLMB is closed under the posterior recursion (6). Moreover, the multi-scan GLMB posterior recursion is, in essence, the GLMB filtering recursion without the marginalization of past labels and kinematic states. This is stated more concisely in the following proposition (see Appendix VII-E for the proof).

###### Proposition 7

Under the standard multi-object system model, if the multi-object posterior at time $k-1$ is a multi-scan GLMB of the form

$$\pi_{0:k-1}(X_{0:k-1}) = \Delta(X_{0:k-1}) \sum_{\xi} w^{(\xi)}_{0:k-1}(\mathcal{L}(X_{0:k-1})) \left[p^{(\xi)}_{0:k-1}\right]^{X_{0:k-1}}, \tag{33}$$

where $\xi \in \Xi$, then the multi-object posterior at time $k$ is the multi-scan GLMB:

$$\pi_{0:k}(X_{0:k}) \propto \Delta(X_{0:k}) \sum_{\xi,\theta_k} w^{(\xi,\theta_k)}_{0:k}(\mathcal{L}(X_{0:k-1}))\, \delta_{D(\theta_k)}[\mathcal{L}(X_k)] \left[p^{(\xi,\theta_k)}_{0:k}\right]^{X_{0:k}} \tag{34}$$

where $\theta_k \in \Theta_k$,

$$w^{(\xi,\theta_k)}_{0:k}(I_{0:k-1}) = 1_{\mathcal{F}(B_k\uplus I_{k-1})}(D(\theta_k)) \left[w^{(\xi,\theta_k)}_{k|k-1}\right]^{B_k\uplus I_{k-1}} w^{(\xi)}_{0:k-1}(I_{0:k-1}) \tag{35}$$

$$w^{(\xi,\theta_k)}_{k|k-1}(\ell) = \begin{cases} \bar\Lambda^{(\theta_k(\ell))}_{B,k}(\ell), & \ell \in D(\theta_k)\cap B_k, \\ \bar\Lambda^{(\xi,\theta_k(\ell))}_{S,k|k-1}(\ell), & \ell \in D(\theta_k) - B_k, \\ Q_{B,k}(\ell), & \ell \in B_k - D(\theta_k), \\ \bar Q^{(\xi)}_{S,k-1}(\ell), & \text{otherwise}, \end{cases} \tag{36}$$

$$p^{(\xi,\theta_k)}_{0:k}\!\left(x^{(\ell)}_{s(\ell):t(\ell)};\ell\right) = \begin{cases} \dfrac{\Lambda^{(\theta_k(\ell))}_{B,k}(x^{(\ell)}_k,\ell)}{\bar\Lambda^{(\theta_k(\ell))}_{B,k}(\ell)}, & s(\ell) = k \\[2ex] \dfrac{\Lambda^{(\theta_k(\ell))}_{S,k|k-1}(x^{(\ell)}_k|x^{(\ell)}_{k-1},\ell)\, p^{(\xi)}_{0:k-1}(x^{(\ell)}_{s(\ell):k-1};\ell)}{\bar\Lambda^{(\xi,\theta_k(\ell))}_{S,k|k-1}(\ell)}, & t(\ell) = k > s(\ell) \\[2ex] \dfrac{Q_{S,k-1}(x^{(\ell)}_{k-1},\ell)\, p^{(\xi)}_{0:k-1}(x^{(\ell)}_{s(\ell):k-1};\ell)}{\bar Q^{(\xi)}_{S,k-1}(\ell)}, & t(\ell) = k-1 \\[2ex] p^{(\xi)}_{0:k-1}\!\left(x^{(\ell)}_{s(\ell):t(\ell)};\ell\right), & t(\ell) < k-1 \end{cases} \tag{37}$$

Note that $p^{(\xi,\theta_k)}_{0:k}$ is indeed a probability density since

$$\int \Lambda^{(\theta_k(\ell))}_{S,k|k-1}(x_k|x_{k-1},\ell)\, p^{(\xi)}_{0:k-1}(x_{s(\ell):k-1};\ell)\, dx_{s(\ell):k} = \int\!\!\int \Lambda^{(\theta_k(\ell))}_{S,k|k-1}(x_k|x_{k-1},\ell)\, p^{(\xi)}_{k-1}(x_{k-1};\ell)\, dx_{k-1}\, dx_k = \bar\Lambda^{(\xi,\theta_k(\ell))}_{S,k|k-1}(\ell),$$

$$\int Q_{S,k-1}(x_{k-1},\ell)\, p^{(\xi)}_{0:k-1}(x_{s(\ell):k-1};\ell)\, dx_{s(\ell):k-1} = \int Q_{S,k-1}(x_{k-1},\ell)\, p^{(\xi)}_{k-1}(x_{k-1};\ell)\, dx_{k-1} = \bar Q^{(\xi)}_{S,k-1}(\ell),$$

where $p^{(\xi)}_{k-1}(\cdot\,;\ell)$ denotes the marginal of $p^{(\xi)}_{0:k-1}(\cdot\,;\ell)$ at time $k-1$.

The multi-scan GLMB posterior recursion (33)-(34) bears a remarkable resemblance to the GLMB filtering recursion (9)-(10). Indeed, the weight increments for multi-scan GLMB and GLMB components are identical. Arguably, the multi-scan GLMB recursion is more intuitive because it involves neither marginalization over previous label sets nor over past states of the trajectories.

The multi-scan GLMB recursion initiates trajectories for new labels, updates trajectories for surviving labels, terminates trajectories for disappearing labels, and stores trajectories that disappeared earlier. Noting that $s(\ell) = k$ is equivalent to $\ell \in D(\theta_k)\cap B_k$, initiation of trajectories for new labels is identical to that of the GLMB filter. Noting that $t(\ell) = k > s(\ell)$ is equivalent to $\ell \in D(\theta_k) - B_k$, the update of trajectories for surviving labels is the same as in the GLMB filter, but without marginalization of past kinematic states. On the other hand, termination/storing of trajectories for disappearing/disappeared labels is not needed in the GLMB filter.
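This case analysis can be sketched as a small label classifier. The function below (names illustrative) mirrors the cases of the weight increment (36) and the behaviour just described: live birth labels initiate trajectories, live non-birth labels extend them, unborn birth labels contribute $Q_{B,k}$, labels dropped from the previous scan terminate, and labels that disappeared at an earlier scan are simply stored:

```python
def classify_label(label, B_k, I_prev, live):
    """live = D(theta_k): labels present at time k (detected or missed)."""
    if label in live:
        # new label -> trajectory initiated; surviving label -> extended
        return "initiate" if label in B_k else "update"
    if label in B_k:
        return "unborn"      # birth label that did not appear
    # disappearing label vs. one already gone before time k
    return "terminate" if label in I_prev else "store"
```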

### III-C Canonical Multi-scan GLMB Posterior

Without summing over the labels or integrating the probability densities of the trajectories, the canonical expression for the multi-scan GLMB posterior takes on a rather compact form. To accomplish this, we represent each $\theta_k \in \Theta_k$ by an extended association map $\gamma_k : \mathbb{L}_k \to \{-1, 0, \ldots, |Z_k|\}$ defined by

$$\gamma_k(\ell) = \begin{cases} \theta_k(\ell), & \text{if } \ell \in D(\theta_k) \\ -1, & \text{otherwise}. \end{cases} \tag{38}$$

Let $\Gamma_k$ denote the set of positive 1-1 maps from $\mathbb{L}_k$ to $\{-1, 0, \ldots, |Z_k|\}$, and (with a slight abuse of notation) denote the live labels of $\gamma_k$, i.e. the domain $D(\theta_k)$, by

$$\mathcal{L}(\gamma_k) \triangleq \{\ell \in \mathbb{L}_k : \gamma_k(\ell) \geq 0\}.$$

Then for any $\gamma_k \in \Gamma_k$, we can recover $\theta_k$ by setting $\theta_k(\ell) = \gamma_k(\ell)$ for each $\ell \in \mathcal{L}(\gamma_k)$. It is clear that there is a bijection between $\Theta_k$ and $\Gamma_k$, and hence $\theta_k$ can be completely represented by $\gamma_k$.
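The bijection is straightforward to realize in code. A minimal sketch (labels and measurement indices invented for illustration) converting between $\theta_k$ and its extended map $\gamma_k$ from (38):

```python
def theta_to_gamma(theta, all_labels):
    """Extend theta to all of L_k, assigning -1 to dead labels (eq. (38))."""
    return {l: theta.get(l, -1) for l in all_labels}

def gamma_to_theta(gamma):
    """Recover theta: the live labels L(gamma) are those with gamma(l) >= 0."""
    return {l: j for l, j in gamma.items() if j >= 0}

theta = {'a': 2, 'b': 0}   # 'a' paired with measurement z_2, 'b' alive but missed
gamma = theta_to_gamma(theta, ['a', 'b', 'c'])
# gamma == {'a': 2, 'b': 0, 'c': -1}, and gamma_to_theta(gamma) == theta
```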

Starting with an empty initial posterior, iteratively applying Proposition 7 gives the posterior at time $k$ as

$$\pi_{0:k}(X_{0:k}) \propto \Delta(X_{0:k}) \sum_{\gamma_{1:k}} w^{(\gamma_{0:k})}_{0:k}\, \delta_{\mathcal{L}(\gamma_{0:k})}[\mathcal{L}(X_{0:k})] \left[p^{(\gamma_{0:k})}_{0:k}\right]^{X_{0:k}} \tag{39}$$

where

$$w^{(\gamma_{0:k})}_{0:k} = \prod_{i=1}^{k} 1_{\Gamma_i}(\gamma_i)\, 1_{\mathcal{F}(B_i\uplus\mathcal{L}(\gamma_{i-1}))}(\mathcal{L}(\gamma_i)) \left[\omega^{(\gamma_{0:i}(\cdot))}_{i|i-1}\right]^{B_i\uplus\mathcal{L}(\gamma_{i-1})} \tag{40}$$

$$\omega^{(\gamma_{0:i}(\ell))}_{i|i-1}$$