An equalised global graphical model-based approach for multi-camera object tracking

02/12/2015
by Weihua Chen, et al.

Non-overlapping multi-camera visual object tracking typically consists of two steps: single camera object tracking and inter-camera object tracking. Most tracking methods focus on single camera object tracking, which happens in the same scene, whereas real surveillance scenes require inter-camera object tracking, for which single camera tracking methods do not work effectively. In this paper, we improve the overall multi-camera object tracking performance with a global graph model and an improved similarity metric. Our method treats the similarities of single camera tracking and inter-camera tracking differently and performs the optimisation in a global graph model. The results show that our method performs well even when single camera object tracking is poor.


I Introduction

Tracking objects of interest is an important and challenging problem in intelligent visual surveillance systems [1]. Since visual surveillance systems produce huge amounts of video streams, it is desirable that objects of interest be tracked automatically by algorithms rather than by human operators. Visual object tracking [2] is a long-standing problem in computer vision, and a great amount of effort has been devoted to visual object tracking within single cameras [3, 4, 5]. In intelligent visual surveillance systems [6, 7], owing to the finite camera field of view, it is difficult to observe the complete trajectory of objects of interest over wide areas with only one camera. Hence, it is desirable to enable the intelligent visual surveillance system to track the objects of interest across multiple cameras [8]. In addition, for practical reasons, the cameras in such systems are usually installed with no overlapping areas. Thus, the system should be able to track objects of interest across multiple non-overlapping cameras, which is the problem this paper focuses on.

As shown in Fig. 1 (Solution A), previous visual object tracking approaches tackle the problem in two separate steps: single camera object tracking (SCT) [9, 10, 11] and inter-camera object tracking (ICT) [12, 13, 14]. SCT approaches [9, 10, 11] attempt to compute the trajectories of multiple objects within a single camera view, while ICT approaches [12, 13, 14] aim to find the correspondences among those trajectories across multiple camera views. ICT approaches often use the trajectories obtained from SCT for their data association, so the overall tracking system is brittle and its performance depends on the results of the single camera object tracking module. For challenging scene videos, existing SCT approaches [15, 16, 17] are also fragile, as their results often contain fragments and false positives. These false positives and fragments directly disturb the ICT module and cause problems such as wrong matching, i.e. two targets in Camera 2 being matched to different tracklets of the same target in Camera 1 (see Fig. 2 (a)), and tracklet missing, i.e. some tracklets of a target being lost during inter-camera tracking (see Fig. 2 (b)). These problems are inevitable as long as multi-camera object tracking is solved in two steps. We address them by integrating the two separate modules and jointly optimising them.

We develop a global multi-camera object tracking approach. It integrates the two steps via an equalized global graph model to avoid these “inevitable” problems, and aims to improve the overall performance of multi-camera object tracking.

Considering the two different steps, we evaluate the overall performance using the following two criteria:

  • Single camera object tracking: measuring how well the completed pedestrian trajectories in a single camera can be used to rebuild their exact historical paths in each scene.

  • Inter-camera object tracking: evaluating how well the inter-camera matching helps to locate the pedestrians in a wide area.

As shown in Fig. 1 (Solution A), SCT and ICT share a similar data association framework: a graph model with an optimisation solution. In the single camera object tracking module, the data association inputs are the initial observations, such as detections or tracklets, and the outputs are the integrated trajectories in each single camera (known as mid-term trajectories). These mid-term trajectories are then used as inputs for the data association in inter-camera object tracking, whose outputs are the final integrated trajectories across cameras (known as final trajectories). To integrate these two data associations, the straightforward idea is to establish a new data association which takes the initial observations as inputs and outputs the final trajectories directly. However, a new problem arises: how to measure the similarity between two observations in the new graph. Some similarities come from observations belonging to the same camera, others from observations belonging to different cameras. Under a single shared similarity metric, the average similarity score between observations in different cameras would commonly be lower (a higher similarity score indicates a higher likelihood of a link between two observations) than that between observations in the same camera, because the appearance and spatio-temporal information of objects are less reliable in ICT than in SCT due to many factors (camera settings, viewpoints and lighting conditions). In this case, the optimisation process makes the graph give priority to linking observations along edges within the same camera instead of those across cameras, which leads to a failed optimisation result for the whole multi-camera object tracking. To solve this problem, we have to answer two questions: how to distinguish the similarities within the same camera from those across different cameras, and how to balance them in the new graph? In this paper, we improve the similarity metric, distinguish the similarities of SCT and ICT, and equalize them in a global graph. A minimum uncertain gap [18] is adopted to establish the improved similarity metric. Thanks to this, the similarity scores of both SCT and ICT are equalized in the proposed global graph model.

The contributions of this paper (a preliminary version appeared in Chen et al. [19]; the source code is available at https://github.com/cwhgn/EGTracker) are as follows.

  1. a global graph model for multi-camera object tracking is presented, which integrates the SCT and ICT steps to avoid the “inevitable” problems;

  2. an improved similarity metric is proposed to equalize the different similarities of the two steps and unify them in one graph;

  3. the proposed approach is evaluated with a comprehensive evaluation criterion, which clearly shows that our method is more effective than the traditional two-step multi-camera visual tracking framework.

Fig. 1: Illustration of three types of multi-camera visual object tracking solutions.

II Related Work

Using a graph model is an efficient and effective way to solve the data association problem in multi-camera visual object tracking. First, graph modeling forms a solvable graph model, consisting of nodes, edges and weights, from the input observations (detections, tracklets, trajectories or pairs). Then an optimisation solution is brought in to solve the graph and obtain optimal or suboptimal solutions. The difference is that single camera object tracking (SCT) places particular emphasis on the graph and the optimisation solution, i.e. how to build a more efficient or more discriminative graph, while inter-camera object tracking (ICT) focuses on the nodes, edges and weights, preferring a more effective feature representation. ICT has more complex and sophisticated representations or similarity metrics (e.g. a transition matrix), but a simpler graph model. The proposed approach takes advantage of both: the proposed similarity metric extends a classical inter-camera tracking method [20], and the global graph model builds on a state-of-the-art SCT approach [21].

This section introduces related approaches for SCT, ICT and multi-camera object tracking (MCT). Section II-A reviews single camera multi-object tracking. Section II-B discusses inter-camera object tracking with a brief introduction to object re-identification. Section II-C covers other multi-camera object tracking approaches that take both SCT and ICT into account.

II-A Single Camera Object Tracking (SCT)

In single camera multi-object tracking, the prediction of the spatio-temporal information of objects is more reliable, and the appearance of objects does not vary much during tracking. This makes the SCT task less challenging than the ICT task; for instance, on less challenging videos, a simple appearance representation (e.g. a color histogram [22, 23, 24]) works well. The graph model is often used to solve specific problems, such as occlusion [25, 26], crowding [24, 27] and interference from appearance similarity [28, 29]. However, for challenging videos, these approaches lead to frequent ID-switch errors and trajectory fragments.

Existing approaches in SCT usually follow a data association-based tracking framework, which links short tracklets [19, 23, 30] or detection responses [31, 32, 33] into trajectories by a global optimization based on various kinds of features, such as motion (position, velocity) and appearance (color, shape). Improvements generally come from two aspects: the graph model and the optimization solution. Some researchers focus on developing a new graph model for their tracklets or detections, aiming at a specific problem: Possegger et al. [26] adopt a geodesic method to handle the occlusion problem, and Dicle et al. [28] use motion dynamics to solve generalized linear assignments when targets with similar appearances exist. Other works in SCT focus on improving the optimization framework, such as continuous energy minimization [34], linear programming [35], CRFs [36] and mixed integer programming [37]. Zhang et al. [21] propose a maximum a posteriori (MAP) model to solve the data association of multi-object tracking, while Yang et al. [36] utilize an online CRF approach that benefits from distinguishing spatially close targets with similar appearances. These approaches can partly reduce ID-switches and trajectory fragments, but their separate optimisation still leaves many fragments and false positives for the ICT step.

(a) Wrong matching (b) Tracklet missing
Fig. 2: Illustration of the two matching problems. Blue and red lines indicate two targets, and arrows show the best matching. Target B is wrongly matched to tracklet A2 in (a). Tracklet A1 is missing in (b).

II-B Inter-camera Object Tracking (ICT)

Inter-camera tracking is more challenging than SCT because of the dramatic appearance changes caused by many factors (camera settings, viewpoints and lighting conditions) and the less reliable spatio-temporal information across different camera views. As a result, learning a discriminative and invariant feature representation and a suitable similarity metric are the main problems in ICT.

Most ICT works approach these problems through multi-camera calibration [38, 39, 40] and feature cues [41, 42, 43, 44, 45]. Since calibration is static information, approaches in this category typically project the multiple scenes into a 3D coordinate system and achieve the matching by using the projected position information; Hu et al. [39] adopt a principal axis-based correspondence to achieve the calibration. For feature cues, most approaches utilize improved appearance or spatio-temporal information to achieve the matching: Kuo et al. [42] apply a multi-instance learning approach to learn an appearance affinity model, while Matei et al. [43] integrate appearance and spatio-temporal likelihoods within a multi-hypothesis framework.

From the perspective of graph modeling, a K-camera ICT data association can be treated as a K-partite graph matching problem. Obtaining the optimal solution is difficult, but many approaches provide suboptimal solutions, e.g. the weighted bipartite graph [46], the Hungarian algorithm [47] and binary integer programming [48]. The K-partite idea assumes that each camera already has a perfect tracking result which should not be changed any more. In practice, the SCT result is not ideal and this assumption is broken; the SCT result should remain modifiable, and the data association then becomes more of a global optimization problem than a K-partite graph matching problem.

Before leaving ICT, it is worth mentioning that object re-identification (Re-ID) is an important part of ICT. When the topology of the camera network is not available or the scenes do not overlap, the spatio-temporal information is invalid, and the appearance cue is the only information that can be used for matching. Studying object re-identification separately helps to better understand the capability of object matching using visual features alone. Most object re-identification improvements focus on certain appearance cues of objects, such as color [20, 49], shape [50, 51] and texture [52]. Recently, Li et al. [53] successfully applied CNNs to Re-ID to extract an effective feature representation. However, the highest identification rate on benchmarks is still below 0.3, and the approaches are not yet practical.

As noted above, ICT approaches share a common assumption that single camera object tracking has been done perfectly, i.e. the trajectories in each camera are all true positives and completely integrated. Until now, such results have been difficult to achieve.

II-C Multi-camera Object Tracking (MCT)

A good MCT is the ultimate goal for any researcher in tracking. Most MCT methods follow the two-step framework: an SCT algorithm plus an ICT algorithm. In the Multi-Camera Object Tracking Challenge [54] at the ECCV 2014 visual surveillance and re-identification workshop, most participating teams used two-step approaches; the winning USC-Vision team combined a state-of-the-art SCT method [32] with a state-of-the-art ICT method [41].

Besides two-step approaches, there are some multi-camera object tracking approaches [55, 56, 57, 58] that, like this paper, concentrate on integrating the SCT and ICT processes into one global graph. They mainly follow a tracking-by-detection paradigm and form a global association graph (see Fig. 1 (Solution C)). Yu et al. [56] propose a nonnegative discretization solution for data association and identify people across different cameras by face recognition; however, in real scenes with objects in a distant view, faces are too small to be recognized. Hofmann et al. [58] use a global min-cost flow graph and connect the different-view detections through their overlapping locations in a world coordinate space, which is not suitable for the non-overlapping camera problem.

In this paper, the proposed method uses tracklet observations as inputs instead of object detections, as they are more reliable for matching. We regard multi-camera object tracking as a global tracklet association under a panoramic view (see Fig. 1 (Solution B)), and the similarities of different tracklets in the global tracklet association are treated differently according to the cameras they belong to. This framework provides a new solution for multi-camera object tracking when the SCT performance is not good enough for the subsequent ICT process. Its local performance in a specific camera view may be as fragmentary as that of traditional SCT methods, even though the inter-camera information may provide useful feedback for each specific camera. But it overcomes the problems that emerge in ICT when SCT is poor, and offers better ICT performance. In practice, better ICT has stronger practical significance than better SCT: for a video surveillance system, it is more important to locate the objects in the whole wide area than in a single scene.

III Global Graph Model

Fig. 3: Illustration of the min-cost flow network. An example of the min-cost flow network with 3 timesteps and 6 tracklets; the numbers of nodes, edges and weights are 14, 21 and 21 respectively.

Our goal is to predict the trajectories from the given series of observed videos. The proposed approach focuses on optimising single camera tracking and inter-camera tracking in one global data association process. The data association is modeled as a global maximum a posteriori (MAP) problem inspired by the MAP formulation of Zhang et al. [21]. The differences are that the inputs of the proposed solution are tracklets rather than object detections, and that the association aims to solve the wrong matching and tracklet missing problems in ICT, whereas Zhang et al. [21] apply it to SCT. The variable definitions are outlined in Table I.

In our approach, a single trajectory hypothesis is defined as an ordered list of target tracklets, i.e. $T_k = \{x_{k_1}, x_{k_2}, \ldots, x_{k_{l_k}}\}$, where $x_{k_i} \in X$. The association trajectory hypothesis is defined as a set of single trajectory hypotheses, i.e. $\mathcal{T} = \{T_k\}$. The objective of the data association is to maximize the posterior probability of $\mathcal{T}$ given the tracklet set $X$ under the non-overlapping constraints [21]:

$$\mathcal{T}^* = \arg\max_{\mathcal{T}} P(\mathcal{T} \mid X) = \arg\max_{\mathcal{T}} \prod_{x_i \in X} P(x_i \mid \mathcal{T}) \prod_{T_k \in \mathcal{T}} P(T_k), \quad T_k \cap T_l = \varnothing \ \ \forall k \neq l \qquad (1)$$

$P(x_i \mid \mathcal{T})$ is the likelihood of tracklet $x_i$. The prior $P(T_k)$ is modeled as a Markov chain containing the transition probabilities $P(x_{k_{i+1}} \mid x_{k_i})$ of all tracklets in $T_k$ [58].

The transition probability is computed from the probability $P_a$ of the appearance feature and the probability $P_m$ of the motion feature:

$$P(x_j \mid x_i) = P_a(x_j \mid x_i)^{w_a} \cdot P_m(x_j \mid x_i)^{w_m} \qquad (2)$$

where $w_a$ and $w_m$ are the weights of the two features.

The MAP association model can be solved by a min-cost flow network [19]. The min-cost flow graph is formulated as $G = (V, E, W)$, where $V$, $E$ and $W$ stand for the nodes, edges and weights respectively, and a weight is the cost of linking along the corresponding edge. In the graph $G$, two nodes $u_i$ and $v_i$ are defined for each tracklet $x_i$. The observation edge from node $u_i$ to $v_i$ indicates the likelihood of tracklet $x_i$; the corresponding observation weight is set to the negative logarithm of the likelihood $P(x_i \mid \mathcal{T})$.

The possible linking relationship between any two tracklets is expressed as a transition edge from node $v_i$ to node $u_j$; the transition weight is the negative logarithm of the transition probability $P(x_j \mid x_i)$:

$$w_{i,j} = -\log P(x_j \mid x_i) \qquad (3)$$

The transition weight can also be decomposed into the probabilities of continuity in appearance and motion:

$$w_{i,j} = -w_a \log P_a(x_j \mid x_i) - w_m \log P_m(x_j \mid x_i) \qquad (4)$$

In addition to these nodes and edges, there are two extra nodes $s$ and $t$: the virtual source and sink of the min-cost flow graph. Enter/exit edges $(s, u_i)$ and $(v_i, t)$ are also added to represent tracklet $x_i$ starting or ending a trajectory. The enter/exit weights are all set to 0 in this paper, because every tracklet could equally be a start or an end with no cost.

In summary, the number of nodes $|V|$ is $2n + 2$, where $n$ is the total number of tracklets in all cameras, and the numbers of edges and weights are smaller than those of the fully connected graph. As shown in Fig. 3, the graph is solved by min-cost flow, and the optimal solution maximises the posterior probability of $\mathcal{T}$ with the minimum cost.

In the rest of this section, we introduce every part of the min-cost flow graph, especially the weights $W$.

$x_i$: A single input tracklet consisting of several attributes, $x_i = \{p_i, \ell_i, c_i, t_i, a_i\}$ (position, likelihood, camera view, time stamp, appearance).
$X$: The set of all input tracklets, $X = \{x_i\}$.
$T_k$: A single trajectory hypothesis consisting of an ordered list of target tracklets, $T_k = \{x_{k_1}, \ldots, x_{k_{l_k}}\}$.
$\mathcal{T}^*$: The output of the algorithm, i.e. the optimal set of trajectory hypotheses.
$G$: The min-cost flow graph, $G = (V, E, W)$.
$V$: The set of nodes in the graph, $V = \{u_i, v_i \mid x_i \in X\} \cup \{s, t\}$.
$E$: The set of edges in the graph, $E = E_{obs} \cup E_{tran} \cup E_{en/ex}$.
$W$: The set of weights in the graph, $W = W_{obs} \cup W_{tran} \cup W_{en/ex}$.
$H_i^t$: The MCSHR of tracklet $x_i$ in the $t$-th frame.
$H_i$: The incremental MCSHR for the whole tracklet $x_i$.
$\mathrm{Sim}(H_i^t, H_j^{t'})$: The similarity between any MCSHR pair $H_i^t$ and $H_j^{t'}$.
$\Delta t_i^*$: The best periodic time for tracklet $x_i$.
TABLE I: Notations of the Equalized Global Graph Model
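To make the construction concrete, the following sketch (not the authors' released EGTracker code) builds a graph of the kind in Fig. 3 with networkx and solves it by min-cost flow. The trajectory count k, the integer cost scaling and the use of Zhang et al.'s log((1-beta)/beta) observation cost (negative for confident tracklets, so that linking them pays off; the text above states -log P(x_i)) are illustrative assumptions.

```python
import math
import networkx as nx

def build_graph(tracklets, transitions, k):
    """tracklets: {tid: beta} tracking confidences; transitions: {(i, j): P(x_j|x_i)}."""
    G = nx.DiGraph()
    G.add_node("s", demand=-k)             # virtual source: k unit flows = k trajectories
    G.add_node("t", demand=k)              # virtual sink
    cost = lambda c: int(round(1000 * c))  # scale: network simplex prefers integer costs
    for i, beta in tracklets.items():
        # Observation edge u_i -> v_i. The paper uses -log P(x_i); this toy demo
        # uses Zhang et al.'s log((1 - beta) / beta), negative for confident
        # tracklets so that including them lowers the total cost.
        G.add_edge(("u", i), ("v", i), capacity=1,
                   weight=cost(math.log((1 - beta) / beta)))
        G.add_edge("s", ("u", i), capacity=1, weight=0)   # enter edge, cost 0
        G.add_edge(("v", i), "t", capacity=1, weight=0)   # exit edge, cost 0
    for (i, j), p in transitions.items():
        # Transition edge v_i -> u_j with weight -log P(x_j | x_i) (Eq. 3).
        G.add_edge(("v", i), ("u", j), capacity=1, weight=cost(-math.log(p)))
    return G

tracklets = {1: 0.9, 2: 0.8, 3: 0.85}
transitions = {(1, 2): 0.7, (1, 3): 0.2, (2, 3): 0.6}
flow = nx.min_cost_flow(build_graph(tracklets, transitions, k=1))
links = [(u[1], w[1]) for u, d in flow.items() if isinstance(u, tuple) and u[0] == "v"
         for w, f in d.items() if f and isinstance(w, tuple)]
print(sorted(links))  # [(1, 2), (2, 3)]: one trajectory x_1 -> x_2 -> x_3
```

In practice the number of trajectories is unknown in advance, so the flow amount k is swept and the cheapest overall solution is kept, as in [21].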

III-A Nodes

In the proposed approach, the tracklets extracted by a single-object tracking method are treated as input observations instead of detections. In other words, these tracklets are used to produce the nodes in the global graph model. One reason is that they carry more information (such as motion) than detections, which only contain appearance information; with more information, they can be considered more credible nodes and their similarities are more reliable. Moreover, the number of tracklets is much smaller than the number of detections, which speeds up the graph optimization, an important property for practical use. In this paper, the deformable part-based model (DPM) detector [59] and an AIF tracker [60] are first used to obtain all the tracklets from each camera. After obtaining detections with the DPM detector, we use the AIF tracker to track every target and produce its tracklets. During tracking, a confidence $s^t$ [60] is calculated to evaluate the accuracy of the tracking result in frame $t$. If the confidence score falls below the threshold $\theta_s$, i.e. $s^t < \theta_s$, the tracker is considered lost. Then all the confidence values of the target in the previous frames are recorded, and their average is taken as the likelihood of tracklet $x_i$:

$$P(x_i \mid \mathcal{T}) = \frac{1}{t_i^e - t_i^s + 1} \sum_{t = t_i^s}^{t_i^e} s^t \qquad (5)$$

where $t_i^s$ and $t_i^e$ are the start and end frames of tracklet $x_i$.

All the tracklets from all cameras are thus obtained, $X = \{x_i\}$, where each tracklet $x_i = \{p_i, \ell_i, c_i, t_i, a_i\}$ consists of position, likelihood, camera view, time stamp and appearance information respectively. The nodes can then be expressed as:

$$V = \{u_i, v_i \mid x_i \in X\} \cup \{s, t\} \qquad (6)$$
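As a minimal illustration of Eq. 5 and the lost-tracker rule, the sketch below (the confidence values and the default threshold are made up; the experiments sweep the threshold from 0 to 0.2) cuts a tracklet at the first low-confidence frame and scores it by the mean confidence.

```python
def cut_and_score(confidences, theta_s=0.1):
    """Per-frame AIF confidences -> (kept frames, tracklet likelihood of Eq. 5)."""
    kept = []
    for s in confidences:
        if s < theta_s:      # s^t < theta_s: the tracker is considered lost
            break
        kept.append(s)
    return kept, (sum(kept) / len(kept) if kept else 0.0)

print(cut_and_score([0.92, 0.88, 0.90, 0.05, 0.80]))
# -> ([0.92, 0.88, 0.9], ~0.9): the tracklet ends before the low-confidence frame
```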

III-B Edges

Edges are also an important part of the graph model. All observation edges and enter/exit edges are kept in the min-cost flow graph. For the transition edges, however, only a subset is retained, because not all edges are meaningful. Three rules are used to select the transition edges in our graph.

Firstly, for an edge $(v_i, u_j)$, the start frame of tracklet $x_j$ must come after the end frame of tracklet $x_i$ without any overlapping frame. This rule ensures the uniqueness of objects in every frame and keeps the edges directed. Secondly, the two tracklets $x_i$ and $x_j$ should come from the same camera or from two cameras with an existing topological connection, which ensures that the link of the two tracklets is possible from a panoramic view. Thirdly, a waiting time threshold $\theta_t$ is brought in to limit the link of two tracklets: if the time interval between two tracklets is longer than the threshold $\theta_t$, the likelihood of this link is close to zero. As a result, the edges that meet all requirements are selected and kept:

$$E_{tran} = \{(v_i, u_j) \mid t_j^s > t_i^e,\ \ t_j^s - t_i^e < \theta_t,\ \ \mathrm{Conn}(c_i, c_j) = 1\} \qquad (7)$$

where $\mathrm{Conn}(c_i, c_j) = 1$ means the camera views of $x_i$ and $x_j$ have an existing topological connection (and every camera is connected to itself).

For all these selected edges, the flow is constrained to 0 or 1, because every target should be at one and only one place at the same time. If an edge carries flow 1 in the optimal solution, the link exists and the two tracklets of this link belong to the same target.
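A direct transcription of the three rules might look as follows; the Tracklet fields and the topology lookup are assumed names, not taken from the released code.

```python
from dataclasses import dataclass

@dataclass
class Tracklet:
    tid: int
    cam: int
    t_start: int   # first frame
    t_end: int     # last frame

def select_transition_edges(tracklets, connected, wait_thresh):
    """connected[(c1, c2)] is True when cameras c1 -> c2 share a topological
    link; a camera counts as connected to itself. wait_thresh is the waiting
    time threshold (60 s * 25 fps in the experiments)."""
    edges = []
    for xi in tracklets:
        for xj in tracklets:
            if xi.tid == xj.tid:
                continue
            # Rule 1: x_j must start strictly after x_i ends (no overlap, directed).
            if xj.t_start <= xi.t_end:
                continue
            # Rule 2: same camera, or cameras with a topological connection.
            if xi.cam != xj.cam and not connected.get((xi.cam, xj.cam), False):
                continue
            # Rule 3: the waiting time between the two tracklets is bounded.
            if xj.t_start - xi.t_end > wait_thresh:
                continue
            edges.append((xi.tid, xj.tid))
    return edges
```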

III-C Weights

Weights are an essential attribute of links and are used to represent the relationships between nodes. In this paper, we use the similarities among tracklets as weights to indicate the cost of building links. As mentioned above, the weights consist of three parts, matching the edges:

$$W = W_{obs} \cup W_{tran} \cup W_{en/ex} \qquad (8)$$

The observation weights $W_{obs}$ are obtained according to Eq. 5, and the enter/exit weights $W_{en/ex}$ are all set to 0, as mentioned above. For the transition weights $W_{tran}$, the appearance similarity and the motion similarity are used to form the weights:

$$w_{i,j} = -w_a \log S_a(x_i, x_j) - w_m \log S_m(x_i, x_j) \qquad (9)$$

In the following, we introduce the two similarities respectively.

Fig. 4: Illustration of computing the periodic time for a tracklet. An example of a tracklet with a length of 9 frames. The Avg Sim column shows the validity of every possible periodic time $\Delta t$; the maximum of the Avg Sim column indicates the best periodic time for this tracklet.

III-C1 Appearance Similarity

As shown in Section II, both SCT and ICT have their own representations and similarity metrics, with those of ICT methods being more sophisticated than those of SCT. In order to build an equalized metric, the proposed approach adopts an ICT representation, but one without any learning process, which would strongly increase the computing time. This representation is called the Piecewise Major Color Spectrum Histogram Representation (PMCSHR) [19]. It is an improved version of the Major Color Spectrum Histogram Representation (MCSHR) [20] that adds periodicity information specific to pedestrians. MCSHR obtains the major colors of a target based on an online k-means clustering algorithm. The original way of computing the MCSHR of a tracklet is to integrate the histograms of all frames together:

$$H_i = H_i^1 \oplus H_i^2 \oplus \cdots \oplus H_i^{L_i} \qquad (10)$$

where $H_i^t$ is the MCSHR of tracklet $x_i$ in the $t$-th frame, $\oplus$ denotes the incremental integration of [20], $H_i$ is the incremental MCSHR [20] of the whole tracklet $x_i$, and $L_i$ is the length of tracklet $x_i$.

As non-rigid targets, pedestrians are challenging objects to track even with the help of the MCSHR. However, we can make some assumptions to help tracking: we assume that pedestrians always walk at a constant speed within a scene, and the goal of our approach is to find the periodic time with which to segment the tracklets.

All MCSHRs $H_i^t$ of the tracklet are first obtained, and then the similarity between any pair $H_i^t$ and $H_i^{t'}$ is computed. The intuition is to evaluate all possible periodic times and find the best one. For a certain periodic time $\Delta t$, the similarity between $H_i^t$ and its next period $H_i^{t + \Delta t}$ is collected for every frame $t$, and the average similarity is the value that determines the validity of this periodic time $\Delta t$. As shown in Fig. 4, the periodic time with the highest validity is taken as the best periodic time for tracklet $x_i$:

$$\Delta t_i^* = \arg\max_{\Delta t \in \Omega}\ \frac{1}{L_i - \Delta t} \sum_{t=1}^{L_i - \Delta t} \mathrm{Sim}(H_i^t, H_i^{t + \Delta t}) \qquad (11)$$

The set $\Omega$ limits the possible range of $\Delta t$, and its lower bound is set to 15: if $\Delta t$ is too small, nearby frames have a strong similarity, which drives Eq. 11 to a false maximum. After this calculation, $\Delta t_i^*$ is the best periodic time for tracklet $x_i$. The tracklet can then be evenly segmented into pieces of length $\Delta t_i^*$ (except the end part). For each piece, the incremental MCSHR is computed. The PMCSHR of tracklet $x_i$ is represented by $\{H_i^{(1)}, \ldots, H_i^{(m_i)}\}$, where $m_i$ is the number of pieces into which the tracklet is segmented.

Then the similarity between every pair of pieces from tracklets $x_i$ and $x_j$ is computed, and the average similarity is taken as the appearance similarity between the two tracklets:

$$S_a(x_i, x_j) = \frac{1}{m_i m_j} \sum_{p=1}^{m_i} \sum_{q=1}^{m_j} \mathrm{Sim}\big(H_i^{(p)}, H_j^{(q)}\big) \qquad (12)$$

where $\mathrm{Sim}(\cdot, \cdot)$ is the similarity metric for two tracklets' incremental MCSHRs.
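The periodic-time search of Eq. 11 and the piecewise averaging behind PMCSHR can be sketched as below. Histogram intersection stands in for the MCSHR similarity, which in the actual method compares major-color spectra [20]; frame_hists is assumed to hold one normalised histogram per frame, and the tracklet is assumed to span at least twice dt_min frames.

```python
import numpy as np

def mcshr_sim(h1, h2):
    """Histogram intersection as a stand-in for the MCSHR similarity [20]."""
    return float(np.minimum(h1, h2).sum())

def best_periodic_time(frame_hists, dt_min=15):
    """Eq. 11: pick the period whose frame pairs (t, t + dt) agree best on average."""
    L = len(frame_hists)
    scores = {dt: np.mean([mcshr_sim(frame_hists[t], frame_hists[t + dt])
                           for t in range(L - dt)])
              for dt in range(dt_min, L // 2 + 1)}
    return max(scores, key=scores.get)

def pmcshr(frame_hists, dt):
    """Segment into period-long pieces and summarise each one
    (the shorter end part is kept as its own piece here)."""
    return [np.mean(frame_hists[s:s + dt], axis=0)
            for s in range(0, len(frame_hists), dt)]

def appearance_similarity(pieces_i, pieces_j):
    """Eq. 12: average similarity over all piece pairs of two tracklets."""
    return float(np.mean([mcshr_sim(p, q) for p in pieces_i for q in pieces_j]))
```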

Fig. 5: Illustration of the calculation of the relative distance.
(a) (b)
Fig. 6: Illustration of the enter/exit areas for multi-camera visual object tracking. The enter/exit areas for links from Cam 1 to Cam 2 are in column (a), while those from Cam 2 back to Cam 1 are in column (b). The blue and yellow areas indicate the exit and enter areas respectively, and the red points represent the disappearing points.

III-C2 Motion Similarity

For a general method applicable to both overlapping and non-overlapping views, it is hard to always build an exact 3D coordinate system that projects all scenes together. Hence, in this paper, a relative distance between two tracklets is adopted to measure the motion similarity. For two tracklets $x_i$ and $x_j$, their interval time is obtained by a simple subtraction, and if the two tracklets are likely to belong to one target, the interval time must be positive:

$$\Delta t_{ij} = t_j^s - t_i^e > 0 \qquad (13)$$

where $t_j^s$ is the start time of tracklet $x_j$ and $t_i^e$ is the end time of tracklet $x_i$.

With the interval time $\Delta t_{ij}$ and the position $p_i^e$ and velocity $v_i^e$ of tracklet $x_i$ at its end time, we can predict where the tracklet will be $\Delta t_{ij}$ later:

$$\hat{p}_j = p_i^e + v_i^e \cdot \Delta t_{ij} \qquad (14)$$

For tracklet $x_j$, we can do the same and obtain its predicted position $\Delta t_{ij}$ earlier:

$$\hat{p}_i = p_j^s - v_j^s \cdot \Delta t_{ij} \qquad (15)$$

As people tend to walk along a smooth path in real scenes, we can assume that if the two tracklets belong to the same person, the corresponding predicted positions must be close to each other; in other words, $\hat{p}_j$ and $\hat{p}_i$ should be close enough to $p_j^s$ and $p_i^e$ respectively. Therefore, the distances between the predicted positions and the original positions are used to represent the motion similarity between two tracklets (see Fig. 5). The relative distance in a single camera is thus computed as:

$$d_{ij} = \|\hat{p}_j - p_j^s\| + \|\hat{p}_i - p_i^e\| \qquad (16)$$
(a) (b) (c)
Fig. 7: Illustration of the computation of the minimum relative distance across cameras. In column (b), the predicted positions of $x_i$ and $x_j$ lie in the exit and enter areas respectively, which indicates that both $d_i$ and $d_j$ are set to 0. The red lines in column (c) are $d_i$ and $d_j$.

As shown in Eq. 16, the relative distance is only valid for two tracklets from the same camera. If the tracklets are from different cameras, the interval time is partly invalid, because in inter-camera cases the paths between cameras are hard to measure, which renders the interval time useless for predicting positions. In this case, the relative distance mostly becomes a huge wrong number. To handle this problem, a minimum relative distance, comparable with Eq. 16, is applied to compute the similarity across cameras.

Enter/exit areas are commonly used in uncalibrated camera systems to help re-localize the exact positions of targets. Hence, we label the enter/exit areas of each camera view with the help of topology information (see Fig. 6).

If a person disappears from an exit area, we assume that she can be found in the enter area of the corresponding camera (see Fig. 7 (a)); if she disappears from an area near an exit area, she re-appears in the corresponding enter area with high probability. Under this assumption, we manually set a disappearing point for each area to connect cameras. A minimum relative distance to the disappearing point over the whole interval time is then adopted to measure the motion similarity across cameras instead of the original relative distance $d_{ij}$ (see Fig. 7 (b) and (c)):

$$d_i = \min_{0 \le \tau \le \Delta t_{ij}} \|p_i^e + v_i^e \cdot \tau - p_{exit}^{c_i}\| \qquad (17)$$

$$d_j = \min_{0 \le \tau \le \Delta t_{ij}} \|p_j^s - v_j^s \cdot \tau - p_{enter}^{c_j}\| \qquad (18)$$

$$d_{ij} = d_i + d_j \qquad (19)$$

where $p_{enter}^{c}$ and $p_{exit}^{c}$ are the positions of the disappearing points of the enter area and the exit area in camera $c$ respectively.

Another benefit of the minimum relative distance is that it is measured within each camera, so it is directly comparable with the relative distance. With its help, the motion similarity metric can be extended from a single camera to a multi-camera system and can be considered well equalized in the global graph.

The final equalized motion similarity metric is:

$$S_m(x_i, x_j) = \exp(-\lambda \cdot d_{ij}) \qquad (20)$$

where $\lambda$ is set to 0.01 in the experiments.
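The whole motion model (Eqs. 13-20) fits in a few lines. The field names, the Euclidean norm and the frame-wise minimum search are assumptions consistent with the text; lambda = 0.01 follows the experiments.

```python
import numpy as np

LAMBDA = 0.01  # set to 0.01 in the experiments (Eq. 20)

def motion_similarity(xi, xj, exit_pt=None, enter_pt=None):
    """xi/xj: dicts with positions 'p_end'/'p_start' (2-vectors), velocities
    'v_end'/'v_start' and frame times 't_end'/'t_start'. For a cross-camera
    pair, pass the disappearing points of the exit and enter areas; otherwise
    the same-camera relative distance is used."""
    dt = xj["t_start"] - xi["t_end"]                 # Eq. 13
    if dt <= 0:
        return 0.0                                   # overlapping in time
    if exit_pt is None:                              # same camera, Eqs. 14-16
        pred_j = xi["p_end"] + xi["v_end"] * dt      # x_i pushed forward
        pred_i = xj["p_start"] - xj["v_start"] * dt  # x_j pulled backward
        d = (np.linalg.norm(pred_j - xj["p_start"])
             + np.linalg.norm(pred_i - xi["p_end"]))
    else:                                            # across cameras, Eqs. 17-19
        d = (min(np.linalg.norm(xi["p_end"] + xi["v_end"] * s - exit_pt)
                 for s in range(dt + 1))
             + min(np.linalg.norm(xj["p_start"] - xj["v_start"] * s - enter_pt)
                   for s in range(dt + 1)))
    return float(np.exp(-LAMBDA * d))                # Eq. 20
```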

IV Equalized Graph Model

During tracking objects in a single camera, we assume that observations are obtained under the same circumstances, such as illumination and viewing angle. Hence the targets show strong invariance in their appearance representations, which can further be used for tracking. During inter-camera object tracking, this invariance is weaker due to the changes between different circumstances. When we establish the graph with nodes and edges, this phenomenon causes the inter-camera similarities to be much lower than the similarities within a single camera. If we used Eq. 12 to compute the appearance similarities with no alignment or equalization of the two similarity distributions, the optimisation process would always preferentially link edges within a single camera and ignore the inter-camera links whenever an edge with a higher similarity exists in the same camera. An accurate alignment of the two similarity distributions is hard to obtain; the proposed approach offers a suitable alignment that can be considered a compensation for the inter-camera similarities. Our purpose is to equalize the difference between the two similarity distributions while keeping the distribution of the inter-camera similarity unaffected. Therefore, our equalization mainly operates on the distribution of the single camera similarity, bringing it close to the inter-camera similarity distribution:

$$\tilde{S}_a(x_i, x_j) = \alpha \cdot S_a(x_i, x_j) + \beta \qquad (21)$$

where $\alpha$ and $\beta$ are the compensation factors and $S_a(x_i, x_j)$, the similarity between tracklets $x_i$ and $x_j$, is obtained by Eq. 12. The factor $\beta$ adjusts the average level of the single camera similarity distribution, and the factor $\alpha$ controls the amplitude of variation. Both are computed from the two similarity distributions:

$$\alpha = \sqrt{\frac{\sigma_e^2}{\sigma_s^2}}, \qquad \beta = \mu_e - \alpha \cdot \mu_s \qquad (22)$$

where $\mu_s$ and $\sigma_s^2$ are the mean and variance of the single camera similarity distribution, computed over all the single camera edges, and $\mu_e$ and $\sigma_e^2$ are those of the inter-camera similarity distribution, obtained from all the inter-camera edges.

However, not all edge similarities are reliable and suitable for computing the mean and variance; some contain a large proportion of noise and should be excluded as outliers. In this paper, a minimum uncertain gap (MUG) [18] is brought in to help filter the edges used for computing the mean and variance. The MUG measures the uncertainty of the likelihood between tracklets: a tracklet link with a small MUG can be considered more reliable, because its similarity is more stable and more believable. As a result, the MUG is treated as a confidence factor $g_{ij}$ for each edge:

$$g_{ij} = \overline{P}(x_j \mid x_i) - \underline{P}(x_j \mid x_i) \qquad (23)$$

where $\overline{P}$ and $\underline{P}$ denote the upper-bound and lower-bound estimates of the likelihood [18].

Therefore, with the help of the MUG filtration, the mean and variance are computed as follows:

$$\mu_s = \mathrm{MEAN}\big(\{S_a(x_i, x_j) \mid (v_i, u_j) \in E_s,\ g_{ij} < \theta_g\}\big) \qquad (24)$$

$$\sigma_s^2 = \mathrm{VAR}\big(\{S_a(x_i, x_j) \mid (v_i, u_j) \in E_s,\ g_{ij} < \theta_g\}\big) \qquad (25)$$

where $\theta_g$ is a confidence threshold, $E_s$ denotes the single camera transition edges ($\mu_e$ and $\sigma_e^2$ are obtained analogously from the inter-camera edges), and MEAN() and VAR() are the mean and variance operations respectively.

The final equalized appearance similarity metric then becomes:

$$\tilde{S}_a(x_i, x_j) = \begin{cases} \alpha \cdot S_a(x_i, x_j) + \beta, & c_i = c_j \\ S_a(x_i, x_j), & c_i \neq c_j \end{cases} \qquad (26)$$
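A sketch of the full equalisation pipeline of this section: MUG-based filtering (Eqs. 23-25) followed by the affine alignment (Eqs. 21-22), applied to the single-camera similarities only (Eq. 26). The affine form and the argument names are reconstructions from the surrounding text, not the released implementation.

```python
import numpy as np

def equalise(intra, inter, mug_intra, mug_inter, theta_g=0.4):
    """intra/inter: appearance similarities (Eq. 12) of the single-camera and
    inter-camera transition edges; mug_*: the MUG (Eq. 23) of each edge."""
    intra, inter = np.asarray(intra), np.asarray(inter)
    # Eqs. 24-25: only edges whose MUG is below the confidence threshold
    # contribute to the distribution statistics.
    keep_s = intra[np.asarray(mug_intra) < theta_g]
    keep_e = inter[np.asarray(mug_inter) < theta_g]
    alpha = np.sqrt(keep_e.var() / keep_s.var())   # amplitude of variation
    beta = keep_e.mean() - alpha * keep_s.mean()   # average level (Eq. 22)
    # Eq. 26: remap only the single-camera similarities (Eq. 21); the
    # inter-camera distribution is left untouched.
    return alpha * intra + beta, inter
```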

V Experiment Results

In this section, the proposed approach is evaluated from the following aspects. First, the global graph model is compared with the traditional two-step framework, using the same feature representation for fairness. Second, a performance comparison between the equalized graph and the non-equalized one demonstrates the effectiveness of the equalization process with the improved similarity metric. Third, the proposed approach is compared with state-of-the-art Multi-Camera Tracking (MCT) methods. As there is no benchmark for MCT, we first introduce a dataset and a comprehensive evaluation criterion, which can be developed into a benchmark in future work. The dataset, called the NLPRMCT dataset, is specialized for multi-camera pedestrian tracking with non-overlapping cameras. The details of the dataset are presented in Section V-A, and the proposed evaluation criterion for MCT is introduced in Section V-B.

Fig. 8: Illustration of the topological relationships during tracking. The topological relationships for every dataset are shown in the right column, and the blue polygons stand for the enter/exit areas used in our experiments for Datasets 1-4.
                 Dataset1   Dataset2   Dataset3   Dataset4
single-camera    71853      88419      18187      42615
inter-camera     334        408        152        256
TABLE II: The single-camera and inter-camera ground truths (numbers of matched frame pairs) for all four sub-datasets.

V-A Datasets

For a comprehensive performance evaluation, it is crucial to have a representative dataset. There are several datasets for visual tracking in surveillance scenarios, such as the PETS [61], CAVIAR [62], TUD [63] and i-LIDS [64] databases. However, most of them are designed for multi-object tracking in a single camera and are not suitable for inter-camera object tracking: PETS uses a simulated environment with overlapping cameras rather than real scenes, while i-LIDS targets indoor multi-camera object tracking and its ground truths are not freely available so far. For these reasons, a new pedestrian dataset is constructed in this paper for multi-camera object tracking to facilitate the tracking evaluation.

The NLPRMCT dataset (http://mct.idealtest.org/Datasets.html) consists of four sub-datasets. Each sub-dataset includes 3-5 cameras with non-overlapping scenes and presents a different situation in terms of the number of people (ranging from 14 to 255) and the levels of illumination change and occlusion. The collected videos contain both real scenes and simulated environments. We also list the topological connection matrices for the pedestrian walking areas. All the videos are nearly 20 minutes long (except Dataset 3) at a rate of 25 fps and are recorded with non-overlapping views during the daytime, which makes the dataset a good representation of different situations in daily life. The connection relationships between scenes are shown in Fig. 8, where the enter/exit areas used in this paper are also marked.

Fig. 9: Performance evaluation of the proposed approach under different parameter settings. The x-coordinate of all figures is the confidence threshold $\theta_s$ of the AIF tracker, and the number in brackets is the corresponding number of tracklets. As $\theta_s$ increases, the tracklet number grows and more tracklet fragments are produced. The y-coordinates of the three rows are the SCT mismatch number, the ICT mismatch number and the MCTA score respectively. The performance score under each setting is shown in the legend. The global graph method is the proposed approach. Two-step with MAP is Zhang's work [21], which uses MAP for the SCT process. In the last two rows, two-step with MAP and two-step with Hungarian stand for the approaches that solve the ICT problem with MAP and with the Hungarian algorithm [47] respectively.

V-B Evaluation Criteria

Both SCT and ICT have their own evaluation criteria. Most SCT trackers use the multi-object tracking accuracy (MOTA) and ID switches [65] as their evaluation criteria, while some SCT papers prefer other terms [11, 24, 42]. In ICT, the ID switch is also a necessary term.

Section I mentioned two criteria that are important for a multi-camera multi-object tracking system; the SCT module and the ICT module correspond to these two criteria respectively. As the two criteria are equally crucial for multi-camera object tracking performance, they should be considered equally important in the final performance measurement.

Nevertheless, in today's multi-camera object tracking there is rarely a widely accepted performance measurement that takes both criteria into account. The common criterion used for multi-camera object tracking is an extension of MOTA that adds the ID switches in SCT and in ICT together, ignoring their different incidence densities. In most video scenes (see Table II), the ground truths used for frame matching in SCT far outnumber those in ICT, which leads trackers to care more about the trajectories within single cameras than about inter-camera matching. In this paper, we treat them separately and provide a new evaluation criterion for multi-camera object tracking. Our criterion takes both the SCT and the ICT criteria into account and unifies them into one evaluation metric, called the multi-camera object tracking accuracy (MCTA):

$$\mathrm{MCTA} = \underbrace{\frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}}_{\text{detection}} \cdot \underbrace{\left(1 - \frac{\sum_t mme_t^s}{\sum_t tp_t^s}\right)}_{\text{SCT}} \cdot \underbrace{\left(1 - \frac{\sum_t mme_t^c}{\sum_t tp_t^c}\right)}_{\text{ICT}} \qquad (27)$$

MCTA is modified from MOTA [65] and applies to multi-camera object tracking. It avoids the disadvantage of MOTA, which can become negative due to false positives: MCTA ranges from 0 to 1. The metric contains three parts: detection ability, SCT ability and ICT ability, corresponding to the three brackets in Eq. 27. Precision and Recall are combined via the F1-score to measure the detection power and the occlusion handling ability. In this paper, the experiments focus on testing the SCT and ICT abilities of the proposed approach, so in the first two experiments we use the ground truths of object detections as inputs instead of running a real detector, which leads to Precision = Recall = 1. In the last experiment, a DPM [59] detector is used to obtain the detection results.

$$\mathrm{Precision} = \frac{\sum_t (h_t - fp_t)}{\sum_t h_t}, \qquad \mathrm{Recall} = \frac{\sum_t (g_t - m_t)}{\sum_t g_t} \qquad (28)$$

where $fp_t$, $h_t$, $m_t$ and $g_t$ are the numbers of false positives, hypotheses, misses and ground truths respectively at time $t$.

$$A^s = 1 - \frac{\sum_t mme_t^s}{\sum_t tp_t^s}, \qquad A^c = 1 - \frac{\sum_t mme_t^c}{\sum_t tp_t^c} \qquad (29)$$

For the SCT and ICT ability parts, we measure the abilities via the number of mismatches (ID-switches). We split the number of mismatches in MOTA [65] into $mme^s$ and $mme^c$, yielding the SCT and ICT accuracy terms of Eq. 29 that enter Eq. 27: $mme^s$ is the number of mismatches happening within a single camera, and $mme^c$ counts the inter-camera mismatches. $tp^s$ and $tp^c$ are the numbers of matched frame pairs in the ground truths: $tp^s$ counts the matchings whose two frames come from the same camera, and $tp^c$ is the number of inter-camera matchings. It is worth noting that both kinds of matchings are counted among the true positive detection results. A new target is counted as an inter-camera ground truth by default in our criterion.
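Since MCTA is just a product of three ratios, it is easy to compute; the check below reproduces the Dataset 1 "Ours" entry of Table VI from its listed precision, recall and bracket values.

```python
def mcta(precision, recall, sct_acc, ict_acc):
    """Eq. 27: detection F1 times the SCT and ICT accuracy brackets (Eq. 29)."""
    f1 = 2 * precision * recall / (precision + recall)
    return f1 * sct_acc * ict_acc

# Dataset 1, "Ours" column of Table VI: P = 0.7967, R = 0.5929,
# A^s = 0.9744, A^c = 0.6220 -> MCTA ~= 0.412, matching the table.
print(round(mcta(0.7967, 0.5929, 0.9744, 0.6220), 3))  # 0.412
```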

                   NonA     EqlA     M        EqlA+M
Dataset1  mme^s    71       76       53       66
          mme^c    123      88       101      49
          MCTA     0.6311   0.7357   0.6971   0.8525
Dataset2  mme^s    83       109      67       93
          mme^c    201      164      126      107
          MCTA     0.5069   0.5973   0.6907   0.7370
Dataset3  mme^s    59       71       74       51
          mme^c    132      116      95       80
          MCTA     0.1312   0.2359   0.3735   0.4724
Dataset4  mme^s    125      137      123      128
          mme^c    187      169      188      159
          MCTA     0.2687   0.3388   0.2649   0.3778
Average   MCTA     0.3845   0.4769   0.5066   0.6099
TABLE III: Empirical comparison of the proposed approach on the four multi-camera tracking datasets.

V-C Global Graph Model vs Two-Step Framework

The advantage of the proposed method is to improve the ICT performance given an imperfect SCT result. In this section, the proposed global graph model is therefore compared with the traditional two-step framework, i.e. an SCT approach plus an ICT approach. We use the same MAP model to solve the data association in both the SCT and ICT steps of the two-step framework, in order to remove the interference of different data association methods. Adopting the MAP model in SCT follows Zhang et al. [21]. Using the MAP model in ICT would not be a suitable solution if the tracking results in each single camera were perfect and unchangeable; but, as discussed in Section II-B, when the SCT results are not ideal, the data association in ICT is closer to a global optimization problem than to a K-partite graph matching problem, and can thus be solved by the MAP model. That is another reason why we use the MAP model for the ICT data association in the traditional two-step framework. As a complement, we also use the Hungarian algorithm [47], a classical data association method for ICT, for the ICT step. For fairness, all baselines use the same feature representation: the PMCSHR appearance feature and the motion feature.

In this experiment, the waiting time threshold $\theta_t$ and the confidence threshold $\theta_g$ of the MUG are set to 60*25*1 frames and 0.4 respectively, and the weights of the two features, $w_a$ and $w_m$, are both 1. To test the ability of the proposed approach to handle imperfect tracklets from SCT, the experiment varies the confidence threshold $\theta_s$ of the AIF tracker to artificially produce more fragments. The threshold ranges from 0 to 0.2, and the corresponding numbers of tracklets are listed beside the threshold values in Fig. 9.

The total single-camera and inter-camera matching numbers of the ground truths for each sub-dataset are listed in Table II. From the first two rows of Fig. 9, we can see that as the number of fragmented tracklets increases, both the single camera mismatch number and the inter-camera mismatch number grow significantly for the proposed global graph and for the two-step framework. In the first row, the single camera mismatch number of the proposed global graph is mostly larger than that of the two-step framework [21], because the two-step framework performs an optimisation within each camera, giving it a better local result. In Dataset 3 and Dataset 4, however, the $mme^s$ of the proposed global graph becomes lower than that of the two-step framework [21]. The reason is that these two datasets are recorded under a simulated condition with many frequent “walking around” behaviours; in this case, the inter-camera information provides more useful feedback for each specific camera and can partly improve the SCT performance. For the inter-camera mismatch number in the middle row, the proposed global graph is much lower than both the MAP and the Hungarian [47] variants of the two-step framework, which indicates the effectiveness of our global graph model in improving the ICT performance. In Dataset 4, the $mme^c$ of the proposed graph is initially no smaller than that of the two-step framework; however, as the fragmented tracklets increase, the $mme^c$ of the proposed graph grows much more slowly and finally becomes smaller than that of the two-step framework. Moreover, within the two-step framework, the ICT data association based on the global MAP is always better than that based on the Hungarian algorithm [47], which partly confirms the assumption that, given non-ideal SCT results, the ICT data association is better treated as a global optimisation problem than as a K-partite graph matching problem. In the last row, the MCTA of the global MAP always keeps the highest score, which implies that the proposed global graph model outperforms the traditional two-step framework.

                  Ours     USC-Vision   Hfutdspmct   CRIPAC-MCT
                           [32]+[41]    [54]         [19]
Dataset1  mme^c   55       27           86           113
          MCTA    0.8353   0.9152       0.7425       0.6617
Dataset2  mme^c   121      34           141          167
          MCTA    0.7034   0.9132       0.6544       0.5907
Dataset3  mme^c   39       70           40           44
          MCTA    0.7417   0.5163       0.7368       0.7105
Dataset4  mme^c   157      72           155          110
          MCTA    0.3845   0.7052       0.3945       0.5703
Average   MCTA    0.6662   0.7625       0.6321       0.6333
TABLE IV: Performance comparison using the ground truths of single camera object tracking as input.

V-D Equalized vs Non-equalized Graph Model

This experiment is conducted to demonstrate the effectiveness of the similarity equalization process. All trackers run under our global graph model. We compare the equalized appearance similarity metric with the non-equalized one, and then combine it with our equalized motion metric. In this experiment, the confidence threshold $\theta_s$ of the AIF tracker is fixed at 0.

The results are shown in Table III. NonA and EqlA are the results with the non-equalized and the equalized appearance features; M corresponds to the results with the equalized motion feature only, and EqlA+M combines the equalized appearance and motion features. The non-equalized appearance similarity yields a lower mismatch number in the single camera than the equalized one: when we conduct the equalization, the single camera performance drops due to the change of the single camera similarity distribution, which is unavoidable but acceptable. In inter-camera tracking, the equalized appearance similarity clearly helps to reduce the number of mismatches across cameras, and adding the equalized motion information reduces $mme^c$ further. MCTA is the final comprehensive score taking both SCT and ICT performance into account: the larger the score, the better the tracker. As seen in Table III, the equalized appearance similarity combined with the equalized motion information achieves the highest score. This indicates that the increased single camera mismatch number of our method is an acceptable price for reducing the inter-camera mismatch number and achieving a higher overall MCT score. Furthermore, when the motion feature is used alone for multi-camera object tracking, its performance is comparable to and sometimes better than the appearance feature, which partly proves the effectiveness of our equalized motion similarity metric.

                  Ours     USC-Vision   Hfutdspmct   CRIPAC-MCT
                           [32]+[41]    [54]         [19]
Dataset1  mme^s   66       63           77           135
          mme^c   49       35           84           103
          MCTA    0.8525   0.8831       0.7477       0.6903
Dataset2  mme^s   93       61           109          230
          mme^c   107      59           140          153
          MCTA    0.7370   0.8397       0.6561       0.6234
Dataset3  mme^s   51       93           105          147
          mme^c   80       111          121          139
          MCTA    0.4724   0.2427       0.2028       0.0848
Dataset4  mme^s   128      70           97           140
          mme^c   159      141          188          209
          MCTA    0.3778   0.4357       0.2650       0.1830
Average   MCTA    0.6099   0.6003       0.4679       0.3954
TABLE V: Performance comparison using the ground truths of object detection as input.

V-E Equalized Global Graph Model vs the State of the Art

In this section, we compare our equalized global MAP graph model with other multi-camera object tracking methods. For comparison, a method must be able to handle both the SCT and the ICT steps, so we compare the proposed graph with current two-step multi-camera object tracking methods from the Multi-Camera Object Tracking (MCT) Challenge [54]. USC-Vision ([32, 41]) is the winner of the challenge and is considered the state-of-the-art two-step multi-camera object tracking approach. We first conduct the comparison under the condition that the ground truths of single camera object tracking are available; the results are shown in Table IV. This reflects the ICT power of each method when the single camera object tracking results are perfect. From the average MCTA score, USC-Vision ([32, 41]) is much better than our proposed method, which proves the advantage of USC-Vision's ICT method. In Table V, only the ground truths of object detections are available, and each tracker must carry out the single camera object tracking by itself. In this case, the single camera object tracking results cannot be as perfect as the ground truths, and the inter-camera object tracking algorithms have to bear the resulting fragments and false positives. From Table V, although the SCT performance of USC-Vision ([32, 41]) is better than ours, the number of its ICT mismatches increases much more sharply than our method's, which indicates that its powerful ICT method loses its advantage under imperfect SCT results. Example results are shown in Fig. 10. As the final evaluation, our equalized global graph model has the highest average MCTA score, which further proves the advantage of the proposed model in improving the ICT performance under an imperfect SCT result. Finally, as perfect detection can never be achieved in reality, we run another experiment without the detection ground truths, using the DPM detector [59] to obtain the detection results. In Table VI, the SCT and ICT accuracy terms $A^s$ and $A^c$ corresponding to Eq. 29 are listed instead of the raw mismatch numbers, because different detection results lead to different numbers of true positives. The results in Table VI show that our result is not the best but is comparable with the state of the art. With a real detector, there are many missed and false positive detections, and the ability of a multi-camera tracker to handle them mainly comes from its SCT part. USC-Vision uses a hierarchical association to build its tracklets, in which the detections are selected carefully and some missed detections can be partly compensated. In our method, a real-time single object tracker [60] is adopted to obtain the tracklets, which can partly handle missed detections; but once the tracker drifts to a false detection, the whole tracklet becomes unreliable. Thanks to the hierarchical association in its SCT step, USC-Vision enters the ICT step with a more reliable set of tracklets than ours. Even with the help of the proposed equalized global graph, our final result is still slightly lower than USC-Vision's. This does not negate the effectiveness of our equalized global graph model, but rather demonstrates the advantage of USC-Vision's SCT method in handling misses and false positives. However, for practical use in real environments, detection-level association is much slower than a real-time single camera tracker.
That is why we use the AIF tracker to obtain tracklets instead of USC-Vision's detection-based hierarchical association. Some other single object trackers, such as TLD [66], may handle false-detection drifts through their online learning mechanisms, but learning the online models costs too much time and memory to be applied to forming our raw tracklets. A real-time single camera tracker that can deal with false detections is therefore a promising direction of future work for multi-camera object tracking.

                      Ours     USC-Vision   Hfutdspmct   CRIPAC-MCT
                               [32]+[41]    [54]         [19]
Dataset1  Precision   0.7967   0.6916       0.7113       0.1488
          Recall      0.5929   0.6061       0.3465       0.2154
          A^s         0.9744   0.9981       0.9229       0.9955
          A^c         0.6220   0.9288       0.6534       0.7111
          MCTA        0.4120   0.5989       0.2810       0.1246
Dataset2  Precision   0.7977   0.6948       0.7461       0.1431
          Recall      0.6332   0.7843       0.3669       0.1933
          A^s         0.9779   0.9986       0.9347       0.9945
          A^c         0.6942   0.8507       0.6122       0.7510
          MCTA        0.4793   0.6260       0.2815       0.1075
Dataset3  Precision   0.8207   0.4750       0.3342       0.0853
          Recall      0.5345   0.6615       0.0986       0.1206
          A^s         0.9749   0.9904       0.9682       0.9715
          A^c         0.2953   0.1014       0.2432       0.1143
          MCTA        0.1864   0.0555       0.0359       0.0111
Dataset4  Precision   0.8355   0.5216       0.7720       0.0606
          Recall      0.6193   0.79375      0.1210       0.0944
          A^s         0.9275   0.9948       0.9865       0.9762
          A^c         0.4308   0.5437       0.2944       0.2950
          MCTA        0.2842   0.3404       0.0608       0.0213
Average   MCTA        0.3405   0.4052       0.1648       0.0661
TABLE VI: Performance comparison without the ground truths of object detection. Rows $A^s$ and $A^c$ are the SCT and ICT accuracy terms of Eq. 29.
(a) Dataset 1: outdoor scene (b) Dataset 2: outdoor scene (c) Dataset 3: indoor scene (d) Dataset 4: outdoor scene
Fig. 10: Samples of mismatches in inter-camera tracking. Four inter-camera tracking examples from Datasets 1-4 are shown in (a)-(d). The first row shows the results of the proposed method, and the second row those of USC-Vision ([32, 41]). A red rectangle marks a mismatch that happened across cameras.

VI Conclusion

To address the problem of multi-camera non-overlapping visual object tracking, we developed a joint approach that optimises single camera object tracking and inter-camera object tracking in one graph, overcoming the disadvantages of the traditional two-step tracking approaches. In addition, the similarity metrics of both the appearance and motion features in the proposed global graph are equalized, which further reduces the number of mismatch errors in inter-camera object tracking. The results show the approach's effectiveness for multi-camera object tracking, especially when the SCT performance is not perfect. Our approach focuses on the graph modeling rather than on feature representation learning; any existing re-identification feature representation method can be incorporated into our framework.

References

  • [1] R. Vezzani, D. Baltieri, and R. Cucchiara, “People reidentification in surveillance and forensics: A survey,” ACM Comput. Surv., vol. 46, no. 29, 2013.
  • [2] A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: An experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 36, no. 7, pp. 1442–1468, 2014.
  • [3] Y. Pang and H. Ling, “Finding the best from the second bests - inhibiting subjective bias in evaluation of visual tracking algorithms,” in IEEE International Conference on Computer Vision (ICCV), 2013.
  • [4] Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
  • [5] K. Huang, L. Wang, T. Tan, and S. Maybank, “A real-time detecting tracking distant objects system for night surveillance,” Pattern Recognition, vol. 41, no. 1, pp. 432–444, 2008.
  • [6] K. Huang and T. Tan, “Vs-star: a visual interpretation system for visual surveillance,” Pattern Recognition Letters (PRL), pp. 2265–2285, 2010.
  • [7] P. L. Venetianer and H. Deng, “Performance evaluation of an intelligent video surveillance system - a case study,” Computer Vision and Image Understanding (CVIU), vol. 114, no. 11, pp. 1292–1302, 2010.
  • [8] X. Wang, “Intelligent multi-camera video surveillance: A review,” Pattern Recognition Letters (PRL), vol. 34, pp. 3–19, January 2012.
  • [9] J. Liu, P. Carr, R. T. Collins, and Y. Liu, “Tracking sports players with context-conditioned motion models,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
  • [10] M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. J. V. Gool, “Online multiperson tracking-by-detection from a single, uncalibrated camera,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 33, pp. 1820–1833, 2011.
  • [11] C.-H. Kuo and R. Nevatia, “How does person identity recognition help multi-person tracking?” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 1217–1224.
  • [12] R. Hamid, R. Kumar, M. Grundmann, K. Kim, I. Essa, and J. Hodgins, “Player localization using multiple static cameras for sports visualization,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 731–738.
  • [13] X. Chen, K. Huang, and T. Tan, “Object tracking across non-overlapping views by learning inter-camera transfer models,” Pattern Recognition, vol. 47, pp. 1126–1137, March 2014.
  • [14] Y. Cai, W. Chen, K. Huang, and T. Tan, “Continuously tracking objects across multiple widely separated cameras,” in Asian Conference on Computer Vision (ACCV), 2007, pp. 843–852.
  • [15] A. Segal and I. Reid, “Latent data association: Bayesian model selection for multi-target tracking,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2904–2911.
  • [16] C. Arora and A. Globerson, “Higher order matching for consistent multiple target tracking,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 177–184.
  • [17] A. Butt and R. Collins, “Multi-target tracking by lagrangian relaxation to min-cost network flow,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 1846–1853.
  • [18] J. Kwon and K. Lee, “Minimum uncertainty gap for robust visual tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2355–2362.
  • [19] W. Chen, L. Cao, X. Chen, and K. Huang, “A novel solution for multi-camera object tracking,” in IEEE International Conference on Image Processing (ICIP), 2014, pp. 2329–2333.
  • [20] M. Piccardi and E. Cheng, “Multi-frame moving object track matching based on an incremental major color spectrum histogram matching algorithm,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2005, p. 19.
  • [21] L. Zhang, Y. Li, and R. Nevatia, “Global data association for multi-object tracking using network flows,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
  • [22] X. Chen, Z. Qin, L. An, and B. Bhanu, “An online learned elementary grouping model for multi-target tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1242–1249.
  • [23] A. Zamir, A. Dehghan, and M. Shah, “Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs,” in European Conference on Computer Vision (ECCV), 2012, pp. 343–356.
  • [24] Y. Li, C. Huang, and R. Nevatia, “Learning to associate: Hybridboosted multi-target tracker for crowded scene,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 2953–2960.
  • [25] M. Yang, Y. Liu, L. Wen, Z. You, and S. Li, “A probabilistic framework for multitarget tracking with mutual occlusions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [26] H. Possegger, T. Mauthner, P. M. Roth, and H. Bischof, “Occlusion geodesics for online multi-object tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [27] S. Tang, M. Andriluka, A. Milan, K. Schindler, S. Roth, and B. Schiele, “Learning people detectors for tracking in crowded scenes,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1049–1056.
  • [28] C. Dicle, O. Camps, and M. Sznaier, “The way they move: Tracking multiple targets with similar appearance,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2304–2311.
  • [29] S. Bae and K. Yoon, “Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [30] B. Wang, G. Wang, K. Chan, and L. Wang, “Tracklet association with online target-specific metric learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1234–1241.
  • [31] L. Wen, W. Li, J. Yan, Z. Lei, D. Yi, and S. Li, “Multiple target tracking based on undirected hierarchical relation hypergraph,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1282–1289.
  • [32] C. Huang, B. Wu, and R. Nevatia, “Robust object tracking by hierarchical association of detection responses,” in European Conference on Computer Vision (ECCV), 2008, pp. 788–801.
  • [33] J. Berclaz, F. Fleuret, E. Türetken, and P. Fua, “Multiple object tracking using k-shortest paths optimization,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 33, pp. 1806–1819, 2011.
  • [34] A. Andriyenko and K. Schindler, “Multi-target tracking by continuous energy minimization,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 1265–1272.
  • [35] H. Jiang, S. Fels, and J. Little, “A linear programming approach for multiple object tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007, pp. 1–8.
  • [36] B. Yang and R. Nevatia, “An online learned crf model for multi-target tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2034–2041.
  • [37] X. Wang, E. Türetken, F. Fleuret, and P. Fua, “Tracking interacting objects optimally using integer programming,” in European Conference on Computer Vision (ECCV), 2014, pp. 17–32.
  • [38] R. Pflugfelder and H. Bischof, “People tracking across two distant self-calibrated cameras,” in IEEE Conference on Advanced Video and Signal Based Surveillance, 2007, pp. 393–398.
  • [39] W. Hu, M. Hu, X. Zhou, T. Tan, J. Lou, and S. Maybank, “Principal axis-based correspondence between multiple cameras for people tracking,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 28, pp. 663–671, 2006.
  • [40] S. Khan and M. Shah, “A multiview approach to tracking people in crowded scenes using a planar homography constraint,” in European Conference on Computer Vision (ECCV), 2006, pp. 133–146.
  • [41] Y. Cai and G. Medioni, “Exploring context information for inter-camera multiple target tracking,” in IEEE Winter Conference on Applications of Computer Vision, 2014, pp. 761–768.
  • [42] C. Kuo, C. Huang, and R. Nevatia, “Inter-camera association of multi-target tracks by on-line learned appearance affinity models,” in European Conference on Computer Vision (ECCV), 2010, pp. 383–396.
  • [43] B. Matei, H. Sawhney, and S. Samarasekera, “Vehicle tracking across nonoverlapping cameras using joint kinematic and appearance features,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3465–3472.
  • [44] S. Zheng, J. Zhang, K. Huang, R. He, and T. Tan, “Robust view transformation model for gait recognition,” in IEEE International Conference on Image Processing (ICIP), 2011.
  • [45] S. Zheng, B. Xie, K. Huang, and D. Tao, “Multi-view pedestrian recognition using shared dictionary learning with group sparsity,” in Interational Conference on Neural Information Processing (ICONIP), 2011.
  • [46] O. Javed, Z. Rasheed, K. Shafique, and M. Shah, “Tracking across multiple cameras with disjoint views,” in IEEE International Conference on Computer Vision (ICCV), 2003, pp. 952–957.
  • [47] H. Kuhn, “Variants of the Hungarian method for assignment problems,” Naval Research Logistics Quarterly, vol. 3, pp. 253–258, 1956.
  • [48] A. Das, A. Chakraborty, and A. Roy-Chowdhury, “Consistent re-identification in a camera network,” in European Conference on Computer Vision (ECCV), 2014, pp. 330–345.
  • [49] R. Zhao, W. Ouyang, and X. Wang, “Learning mid-level filters for person re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 144–151.
  • [50] K. Raftopoulos and M. Ferecatu, “Noising versus smoothing for vertex identification in unknown shapes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 4162–4168.
  • [51] X. Wang, G. Doretto, T. Sebastian, J. Rittscher, and P. Tu, “Shape and appearance context modeling,” in IEEE International Conference on Computer Vision (ICCV), 2007, pp. 1–8.
  • [52] O. Hamdoun, F. Moutarde, B. Stanciulescu, and B. Steux, “Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences,” in IEEE International Conference on Distributed Smart Cameras, 2008, pp. 1–6.
  • [53] W. Li, R. Zhao, T. Xiao, and X. Wang, “Deepreid: Deep filter pairing neural network for person re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 152–159.
  • [54] “Multi-camera object tracking challenge,” http://mct.idealtest.org.
  • [55] F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, “Multicamera people tracking with a probabilistic occupancy map,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 30, pp. 267–282, 2008.
  • [56] S. Yu, Y. Yang, and A. Hauptmann, “Harry potter’s marauder’s map: Localizing and tracking multiple persons-of-interest by nonnegative discretization,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3714–3720.
  • [57] L. Leal-Taixe, G. Pons-Moll, and B. Rosenhahn, “Branch-and-price global optimization for multi-view multi-target tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 1987–1994.
  • [58] M. Hofmann, D. Wolf, and G. Rigoll, “Hypergraphs for joint multi-view reconstruction and multi-object tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3650–3657.
  • [59] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 32, no. 9, pp. 1627–1645, 2010.
  • [60] W. Chen, L. Cao, J. Zhang, and K. Huang, “An adaptive combination of multiple features for robust tracking in real scene,” in IEEE International Conference on Computer Vision Workshops, 2013, pp. 129–136.
  • [61] “PETS2009 dataset,” http://www.cvg.rdg.ac.uk/PETS2009/.
  • [62] “CAVIAR dataset,” http://homepages.inf.ed.ac.uk/rbf/CAVIAR/.
  • [63] “TUD dataset,” http://www.d2.mpi-inf.mpg.de/node/428/.
  • [64] “i-LIDS MCTS,” http://www.itl.nist.gov/iad/mig/tests/trecvid/2008/.
  • [65] K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance: The CLEAR MOT metrics,” EURASIP Journal on Image and Video Processing, 2008.
  • [66] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI), vol. 34, pp. 1409–1422, 2012.