SOT for MOT

12/04/2017 ∙ by Qizheng He, et al. ∙ Megvii Technology Limited ∙ Tsinghua University

In this paper we present a robust tracker for the multiple object tracking (MOT) problem, under the framework of tracking-by-detection. As the first contribution, we innovatively combine single object tracking (SOT) algorithms with multiple object tracking algorithms, and our results show that SOT is a general way to strongly reduce the number of false negatives, regardless of the quality of detection. Another contribution is to show that, with a deep learning based appearance model, detections of the same object can be associated efficiently and with high accuracy. This appearance model plays an important role both in our MOT algorithm, to correctly associate detections into long trajectories, and in our SOT algorithm, to discover new detections mistakenly missed by the detector. The deep neural network based model ensures the robustness of our tracking algorithm, which can perform data association in a wide variety of scenes. We ran comprehensive experiments on a large-scale and challenging dataset, the MOT16 benchmark, and the results show that our tracker achieves state-of-the-art performance on both public and private detections.

1 Introduction

Multiple object tracking is the problem of automatically identifying multiple objects in a video and representing them as a set of trajectories with high accuracy. It is an important problem in computer vision because it plays a fundamental role in various applications, e.g. autonomous driving and surveillance video processing. In this paper we mainly focus on pedestrian tracking in video, but with a general object detector our method can be easily generalized to other types of objects.

Various ways have been proposed to solve the multiple object tracking problem. However, thanks to the rapid development of deep learning based object detection methods, most recent state-of-the-art research has focused on the framework of tracking-by-detection. After obtaining all detection hypotheses in the video, the tracking problem becomes a data association problem: combining detections of the same object into a corresponding trajectory.

Though more and more advanced tracking-by-detection algorithms have been proposed over the past years, we observe that some parts of the framework still need improvement. First, we notice that tracking results rely heavily on the quality of detections. However, detectors may fail to detect objects at some moments, e.g. when objects are in crowded scenes or when they are partially occluded. This problem cannot be completely solved by current detectors and is inherently difficult, because the detector treats each frame of the video separately. Few previous tracking-by-detection trackers try to solve this problem, so when the detector fails, most tracking algorithms can only recover the missing detections by coarse methods such as interpolation within trajectories, which leaves room for improvement. Second, a reliable appearance model is crucial to data association. Most previous works use hand-crafted affinity measurements as their appearance models, which may not be robust enough.

Based on the tracking-by-detection framework, this paper makes the following contributions. First, in order to further reduce the number of false negatives generated by the detector, we innovatively use single object tracking within the framework of multiple object tracking, which to our knowledge is the first work to explore this direction. Second, instead of the hand-crafted appearance models used in many previous works, we use a deep neural network based appearance model throughout the whole framework, which provides robustness in complex tracking scenes. Our experiments show that, with the help of our appearance model, we can pay little attention to spatial-temporal constraints and still achieve state-of-the-art performance through a reliable appearance-based affinity measure.

Figure 1: An overview of our tracking framework.

2 Related work

Various solutions have been proposed for the MOT problem. One possible way is the recurrent neural network (RNN). Milan et al. [30] provided an end-to-end deep learning approach using RNNs, which can predict target existence states and perform data association within a single network. However, attractive as it sounds, their results are currently still not comparable in performance with other state-of-the-art algorithms. There are also other approaches, such as MCMC-based particle filtering [7, 19].

Recent research has focused on the tracking-by-detection framework, which has the following advantages: a) it can automatically detect new objects when they appear in the scene, and when they leave the scene their detections naturally disappear; b) because the detector treats each frame in the video separately, camera motion has little impact on detection hypothesis generation in the first place; and c) with strong category information given by the detector, it effectively prevents bounding boxes from drifting to the background.

An example of such an approach is bipartite matching. It uses nodes in a graph to represent detections, uses edge weights to represent the affinity between two detections, and then finds the optimal matching between two consecutive frames. Its advantages are obvious: it is efficient, because the Kuhn-Munkres algorithm proposed by Kuhn [22] for bipartite matching runs in polynomial time, and it is an online algorithm. However, online tracking algorithms have their limits, and as Choi [6] pointed out, global or batch tracking methods have an advantage over online methods by optimizing over a larger number of consecutive frames.

As a more advanced version, Zhang et al. [51] viewed the MOT problem as a MAP data association problem, and optimized it globally using min-cost network flow. The min-cost network flow algorithm also runs in polynomial time. Though the network flow solution is optimal under their model, the model itself has limitations. For example, it only considers affinity between consecutive detections of one object and does not take higher-order relationships within a trajectory into account. Another weakness is that although they proposed an occlusion model, their method can hardly recover a trajectory if the object suffers a relatively long occlusion. After their work, Pirsiavash et al. [33] proposed an algorithm that finds a sub-optimal solution even faster using dynamic programming, and Butt and Collins [5] considered trajectory smoothness constraints by introducing Lagrangian relaxation into min-cost network flow.

There are more complex models based on the tracking-by-detection framework. Milan et al. [31] made a more complete problem formulation for tracking and defined a complicated objective function. Though the optimal solution is hard to find, they showed that a sub-optimal solution can outperform the optimal solutions of weaker models. [47, 48] also viewed the MOT problem from an energy minimization perspective and solved it using CRF models. [16, 46] provided other ways to design data association methods.

Some other researchers consider all pairwise affinities between detections in a trajectory, representing each trajectory as a clique. The MOT problem then becomes a multi-clique problem, or equivalently a graph partitioning or multicut problem, in which similar detections of the same object are clustered spatially and temporally. [9, 18, 36, 39, 40, 50] are examples of this approach. The multi-clique problem can be solved optimally by binary integer programming [36], or nearly optimally by heuristic approaches [39]. There are other methods that introduce high-order terms in a graph model, like [6], which used a CRF model to find the solution for their graphical model. These methods also achieve high performance.

The problem of single object tracking is different from multiple object tracking. A common formulation of SOT is to find a short-term trajectory of a single object (the foreground) given only the bounding box of the tracked object in the first frame, and solutions typically perform no re-detection. Other objects appearing in the scene (even of the same category) are all considered background. [3, 28, 32, 35, 41, 44, 45, 53], among others, proposed solutions for the SOT problem.

3 Proposed framework

Our whole tracking framework is illustrated in Fig. 1. Our algorithm runs as follows: first we obtain all detections from the detector (Sec. 3.2); then we perform single object tracking on all detections (Sec. 3.3), which results in an enlarged detection set; next we perform our multiple object tracking algorithm on the new detection set (Sec. 3.4); finally we perform a post-processing step (Sec. 3.5) and obtain the final tracking result.

3.1 Notations

A video sequence of N frames is notated as V = {F_1, ..., F_N}. In each frame F_t there is a set of detection hypotheses generated by the detector, represented as D_t. A detection d consists of its frame number t_d, its score s_d and its bounding box b_d = (x_d, y_d, w_d, h_d), where (x_d, y_d) is the top-left point of the bounding box, w_d is the width and h_d is the height. A trajectory is defined as a set of detections of the same object, notated as T = {d_1, d_2, ...}. Note that we do not require the detections of T to come from consecutive frames; for example, a trajectory can represent an object's presence in frames t and t + 2 without its presence in frame t + 1. A tracklet is informally defined as a short-term and continuous trajectory.

Let I_d be the image within detection d's bounding box; our appearance model φ can calculate its feature φ(I_d). To increase robustness, let Ī_d be the horizontally flipped image within d's bounding box; the feature of detection d is calculated as

f_d = \frac{1}{2}\left(\phi(I_d) + \phi(\bar{I}_d)\right)    (1)

Two detections d_i and d_j have affinity score

a(d_i, d_j) = 1 - \frac{1}{2}\,\lVert f_{d_i} - f_{d_j} \rVert_2    (2)

a(d_i, d_j) is bounded in the range [0, 1].
A trajectory T's feature is defined as the average feature of all its detections, i.e.

f_T = \frac{1}{|T|}\sum_{d \in T} f_d    (3)

Trajectory T and detection d have affinity score

a(T, d) = 1 - \frac{1}{2}\,\lVert f_T - f_d \rVert_2    (4)
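
The following Python sketch makes these definitions concrete. Here `embed` stands in for the appearance network of Sec. 4, and the unit-normalization of features (which keeps the affinity in [0, 1] as reconstructed above) is our assumption:

```python
import numpy as np

def detection_feature(embed, patch):
    """Eq. (1): average the features of a patch and its horizontal flip."""
    f = (embed(patch) + embed(patch[:, ::-1])) / 2.0
    return f / np.linalg.norm(f)  # re-normalize (our assumption) to keep affinities in [0, 1]

def affinity(f_a, f_b):
    """Eqs. (2)/(4): affinity from the L2 distance between unit-norm features."""
    return 1.0 - np.linalg.norm(f_a - f_b) / 2.0

def trajectory_feature(det_features):
    """Eq. (3): a trajectory's feature is the mean of its detections' features."""
    f = np.mean(det_features, axis=0)
    return f / np.linalg.norm(f)  # keep unit norm for the affinity bound
```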

3.2 Detection

Detection is the first step of the tracking-by-detection framework. To emphasize the importance of detection for tracking, we first introduce MOTA, a widely used measurement for multiple object trackers, whose definition in [29] is as follows:

\mathrm{MOTA} = 1 - \frac{\sum_t (\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t)}{\sum_t \mathrm{GT}_t}    (5)

where FN_t, FP_t and IDSW_t are the numbers of false negatives, false positives and identity switches in frame t, and GT_t is the number of ground truth objects in frame t.

Yu et al. [49] noticed that the sum of false positives (FP) and false negatives (FN) strongly affects the value of MOTA, which means the performance of a tracking algorithm depends strongly on the quality of detections.
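
For reference, Eq. (5) transcribed directly (variable names are ours):

```python
def mota(total_fn, total_fp, total_idsw, total_gt):
    """Eq. (5): MOTA as one minus the total error ratio.

    Inputs are counts summed over all frames; large FP + FN totals
    dominate the score, so detection quality caps tracker performance."""
    return 1.0 - (total_fn + total_fp + total_idsw) / total_gt
```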

A state-of-the-art detector with low FPs and FNs is of great help to the whole MOT algorithm, as indicated by the definition of MOTA. However, detectors have their limitations. Analyzing the source of false negatives, detectors may fail to provide detections when objects are crowded or partially occluded, resulting in FNs. It is the tracker's duty to further reduce the number of FPs and FNs using higher-order image and spatial-temporal information. In the next part, we present our SOT algorithm, which can recover some of the FNs in the detection. Fig. 2 illustrates an application of our SOT algorithm.

Figure 2: Public detections (bounding boxes in solid lines) in MOT16-05 #15 (left), #16 (right). By tracking detections in frame #15, our SOT algorithm can help to find objects missed by the detector in frame #16 (dotted bounding boxes), thus reducing the number of FNs. The figure is best shown in color.

3.3 SOT algorithm

There are plenty of works studying single object tracking algorithms; however, to our knowledge no previous work has succeeded in directly using SOT algorithms to improve the performance of MOT algorithms.

One might think that we can simply run an SOT algorithm on each object in the MOT problem to obtain a solution for MOT. One reason this cannot easily work is model drift: it commonly happens that the recorded appearance of an object in the model slowly drifts away from its real appearance, which finally causes the bounding box to drift to the background, making it hard to terminate the trajectory. Another reason is that it does not consider the interaction between objects, which is important when we have multiple objects to track. A third reason is that sometimes we do not even know when to start tracking an object that appears for the first time.

Tao et al. [41] presented a single object tracker based on a siamese deep neural network and showed that, with the robust matching function learned by the network, a simple tracker can achieve state-of-the-art results on SOT benchmarks. Inspired by their work, we propose a variant SOT algorithm as one step in our whole MOT framework, immediately after the detection step. The aim of this step is to further reduce the number of FNs in the detection, without significantly increasing the number of FPs.

Detection candidate sampling Here we design a function SOT(d, t). Suppose the detector gives a detection d of an object o (this implicitly assumes d is a true positive, which is our assumption); we want to find a detection d* as the result, which corresponds to object o in the frame with frame number t (if o really appears in that frame). b_d is the provided prior knowledge of the location of o's bounding box. We first generate a set S containing detection candidates in the t-th frame, using dense sampling on the distribution of the location and scale of d's bounding box:

S = \{\, d' \mid t_{d'} = t,\ b_{d'} = (x_d + \Delta x,\ y_d + \Delta y,\ s\,w_d,\ s\,h_d) \,\}    (6)

where the offsets (Δx, Δy) and the scale s are densely sampled around (0, 0) and 1, respectively.

Note that our assumptions may be violated, e.g. when we call the function with d being a FP. The resulting problems are dealt with in our MOT algorithm.

Detection candidate matching Using our deep appearance model introduced in Sec. 4, we find the best matched detection d* among all detection candidates in S, using the appearance similarity between d and each candidate d':

d^* = \arg\max_{d' \in S} a(d, d')    (7)

Finally, if a(d, d*) is larger than a pre-defined threshold τ, the function returns d* as the result. Otherwise it returns ∅, meaning that object o does not appear in the t-th frame.

With a carefully trained appearance model, a detection candidate sampled at a more precise location will have an appearance similarity score larger than other candidates that deviate from the true location of the object. This property enables our SOT algorithm to find objects in new frames with good localization accuracy.
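
A minimal sketch of the sampling-and-matching step follows; `crop` and `embed` are hypothetical helpers (patch cropper and feature extractor), and the threshold, offsets and scales are illustrative values rather than the paper's settings:

```python
import numpy as np

def track_single_object(frame_image, prior_box, target_feature, embed, crop,
                        tau=0.7, offsets=(-0.2, -0.1, 0.0, 0.1, 0.2),
                        scales=(0.95, 1.0, 1.05)):
    """Eqs. (6)-(7): densely sample boxes around the prior, keep the best match."""
    x, y, w, h = prior_box
    best_box, best_score = None, -1.0
    for dx in offsets:                      # offsets relative to box size
        for dy in offsets:
            for s in scales:                # scale perturbations
                box = (x + dx * w, y + dy * h, s * w, s * h)
                f = embed(crop(frame_image, box))
                score = 1.0 - np.linalg.norm(target_feature - f) / 2.0  # Eq. (2)
                if score > best_score:
                    best_box, best_score = box, score
    # return the best candidate only if it is similar enough to the target
    return best_box if best_score > tau else None
```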

SOT with multiple object interaction We present the proposed SOT algorithm in Algorithm 1, where we consider multiple object interactions. It runs in two directions on the time axis, within a batch of frames that contains detection set D.

1:  initialize every detection in D as a tracklet of length 1
2:  initialize an empty priority queue Q
3:  for each tracklet, put its backward tracklet end and its forward tracklet end into Q
4:  while Q is not empty do
5:     extract the valid tracklet end e with the largest score from Q
6:     if e's direction is forward then
7:        if e has reached the last frame of the batch then continue
8:        if there exists a backward tracklet end e' of another tracklet that matches e in appearance and location then
9:           merge the two tracklet ends e and e', update Q
10:          continue
11:       end if
12:       d* ← SOT(e, t_e + 1)
13:       if d* = ∅ or d* conflicts with a detection already claimed by another tracklet then
14:          continue
15:       end if
16:       add d* to the detection set
17:       merge d* with the current tracklet end
18:       insert the new tracklet end into Q
19:    else
20:       run a similar process with the direction being backward
21:    end if
22: end while
Algorithm 1 proposed SOT algorithm

After running this algorithm, we get a set of tracklets within the batch, which may contain new detections found by the SOT algorithm.

Here are some explanations of this algorithm. First, extending tracklet ends in decreasing order of score naturally discovers high-quality tracklets at the beginning, which effectively prevents FPs from forming longer tracklets. Second, the reason we use a previously confident detection, rather than the most recently added one, as the standard appearance for the object is to prevent model drift, which commonly happens in traditional SOT algorithms. If the appearance similarity falls below the threshold, we consider the object no longer visible.

This algorithm will certainly find new FPs that are similar in appearance to existing FPs; however, it is not hard to discriminate them in the subsequent MOT algorithm.
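
A condensed sketch of the extension loop in Algorithm 1, under simplifying assumptions: tracklet-end merging (lines 8-11) is elided, and `sot_track` and `conflicts` are hypothetical helpers standing in for the SOT function of Sec. 3.3 and the overlap test:

```python
import heapq

def extend_tracklets(detections, sot_track, conflicts):
    """Grow tracklets from the most confident ends first (sketch of Algorithm 1).

    `detections`: dicts with 'frame' and 'score' keys; `sot_track(det, frame)`
    returns a new detection dict (assumed to carry a score, e.g. its matching
    affinity) or None; `conflicts(det, tracklets)` is a hypothetical test for
    collisions with detections already claimed by other tracklets."""
    tracklets = [[d] for d in detections]       # every detection starts alone
    heap = []                                   # max-heap via negated scores
    for i, t in enumerate(tracklets):
        heapq.heappush(heap, (-t[0]['score'], i, +1))   # forward end
        heapq.heappush(heap, (-t[0]['score'], i, -1))   # backward end
    while heap:
        _, i, step = heapq.heappop(heap)        # most confident end first
        end = tracklets[i][-1] if step == +1 else tracklets[i][0]
        new = sot_track(end, end['frame'] + step)
        if new is None or conflicts(new, tracklets):
            continue                            # stop extending this end
        if step == +1:
            tracklets[i].append(new)
        else:
            tracklets[i].insert(0, new)
        heapq.heappush(heap, (-new['score'], i, step))  # keep extending
    return tracklets
```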

Bounding box regression We apply bounding box regression to SOT results from private detection, and observe that this significantly cuts down the number of FPs generated by the SOT algorithm. We do not perform this step on tracking results from public detection, though it would bring a significant improvement in MOTA, because we think doing so would make comparison with other tracking algorithms unfair.

Naive template matching implementation To demonstrate the capability of our SOT step, we also tried a naive implementation that uses template matching instead of our deep neural network based appearance model. We used the function gpu::matchTemplate() provided by OpenCV as an alternative appearance model, which computes the correlation between two image patches. This affinity measure is similar to [4], and is simple and fast. Our results show that even with a low-quality matching function, performing the SOT step can still improve the performance of the whole MOT algorithm.

3.4 MOT algorithm

In this part we introduce our proposed MOT framework. Our algorithm works in batches, i.e. we perform data association within a temporal window of B consecutive frames. The batch size B is proportional to the FPS of the video, i.e. each batch has a time length of one second. To start, we first perform NMS on all detections generated by the SOT step.

Tracklet generation Many recent works use the pipeline of first generating consecutive tracklets and then merging them into longer trajectories. For example, Wang et al. [43] generate tracklets based on posterior probabilities and find the solution with a successive shortest path algorithm. Choi [6] generates tracklets by greedily finding the detection that best matches any detection within the current tracklet according to their appearance model, the ALFD metric, then merging that detection into the tracklet. [17, 34, 42] are other works that introduce tracklet generation. An advantage of this pipeline is that it naturally handles occlusion, with the help of a powerful appearance model that can link spatially and temporally separated tracklets of the same object.

Now we present our tracklet generation method. Let T^t be the set of tracklets formed before time t. We construct a bipartite graph G = (U ∪ V, E): each node u_i ∈ U represents a tracklet T_i ∈ T^t, each node v_j ∈ V represents a detection hypothesis d_j ∈ D_t, and E = U × V. The edge weight between u_i and v_j is defined as

w(u_i, v_j) = a(T_i^{(k)}, d_j)    (8)

where T_i^{(k)} contains the last k detections of T_i.

We perform bipartite matching on G using the Kuhn-Munkres algorithm [22] and obtain a new set of tracklets T^{t+1} by merging the matched tracklets and detections. T^{t+1} also contains the unmatched detections, regarded as tracklets of length 1. By letting t ← t + 1 we repeatedly perform bipartite matching within the batch. Finally we delete tracklets shorter than a minimum length.
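
One matching round can be sketched as follows; `edge_weight` stands in for Eq. (8), `min_affinity` is an illustrative gate rejecting weak matches, and `linear_sum_assignment` is SciPy's Kuhn-Munkres implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Kuhn-Munkres [22]

def match_round(tracklets, detections, edge_weight, min_affinity=0.5):
    """One bipartite matching round of the tracklet generation step."""
    if not tracklets or not detections:
        return tracklets + [[d] for d in detections]
    cost = np.empty((len(tracklets), len(detections)))
    for i, t in enumerate(tracklets):
        for j, d in enumerate(detections):
            cost[i, j] = -edge_weight(t, d)      # maximize affinity = minimize cost
    rows, cols = linear_sum_assignment(cost)
    matched = set()
    for i, j in zip(rows, cols):
        if -cost[i, j] >= min_affinity:
            tracklets[i].append(detections[j])   # merge detection into tracklet
            matched.add(j)
    # unmatched detections start new tracklets of length 1
    return tracklets + [[d] for j, d in enumerate(detections) if j not in matched]
```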

Long-term trajectory association After the tracklet generation step, we get a set of trajectories. We repeatedly find the pair of trajectories T_i and T_j with the maximum appearance affinity that satisfies the association constraints defined below, until no such pair exists:

(T_i^*, T_j^*) = \arg\max_{(T_i, T_j)\ \text{satisfying the constraints}} a(T_i, T_j)    (9)

We merge T_i^* and T_j^* into a new trajectory; if they temporally overlap on a frame, we choose the detection with the larger score on that frame. This step relies heavily on the power of our appearance model.

Association constraints Now we introduce our association constraints for two trajectories T_i and T_j (a code sketch follows the list):
1) IOU constraint: T_i and T_j may overlap on some time segments. Suppose in one time segment [t_1, t_2] there are detections d_t^i ∈ T_i and d_t^j ∈ T_j for t_1 ≤ t ≤ t_2. The IOU constraint requires that for each overlapping time segment the average IOU of the detections be larger than a threshold τ_IOU, i.e.

\frac{1}{t_2 - t_1 + 1} \sum_{t=t_1}^{t_2} \mathrm{IOU}(d_t^i, d_t^j) > \tau_{\mathrm{IOU}}    (10)

2) Velocity constraint: Suppose in the merged trajectory there is a gap [t_1, t_2], i.e. the trajectory contains detections at frames t_1 and t_2 but none in between. We think that in a video of a moving scene the velocity estimation is unreliable, therefore we use only a weak spatial-temporal constraint: let v be the velocity in the gap estimated under a linear velocity assumption; then v should satisfy

\lVert v \rVert_2 \le v_{\max}    (11)

where v_max is a pre-defined constant.
3) Gap length constraint: all gaps within the merged trajectory should have length less than or equal to a threshold.
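
The three checks can be sketched as follows, treating all temporally shared frames as a single overlapping segment for brevity; the threshold values are illustrative, not the paper's:

```python
def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def satisfies_constraints(traj_a, traj_b, tau_iou=0.5, v_max=50.0, max_gap=30):
    """Check constraints 1)-3) for two trajectories (frame -> box dicts).

    tau_iou, v_max (pixels/frame) and max_gap (frames) are illustrative."""
    shared = sorted(set(traj_a) & set(traj_b))
    if shared:  # 1) average IOU over the overlapping segment must be high
        mean_iou = sum(iou(traj_a[t], traj_b[t]) for t in shared) / len(shared)
        if mean_iou <= tau_iou:
            return False
    frames = sorted(set(traj_a) | set(traj_b))
    for t0, t1 in zip(frames, frames[1:]):      # scan gaps of the merged track
        if t1 - t0 <= 1:
            continue
        if t1 - t0 > max_gap:                   # 3) gap length bound
            return False
        b0 = traj_a.get(t0) or traj_b.get(t0)
        b1 = traj_a.get(t1) or traj_b.get(t1)
        vx = (b1[0] - b0[0]) / (t1 - t0)        # 2) linear velocity estimate
        vy = (b1[1] - b0[1]) / (t1 - t0)
        if (vx * vx + vy * vy) ** 0.5 > v_max:
            return False
    return True
```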

3.5 Post processing

Interpolation The trajectories generated by the previous steps may consist of several continuous parts separated by temporal gaps, which commonly happens when an object is occluded for some time or the detector misses detections. If occlusion occurs, our algorithm has the ability to link the trajectories of that object before and after the gap, based on our powerful appearance model. We use a linear velocity assumption to interpolate detections within the gap.

Smoothing We also adopt refinements based on smoothness. Let T be a trajectory and d ∈ T a detection; we use the detections in T whose frame numbers lie within a sliding temporal window around d to revise the location of d. This step makes the trajectory smoother and refines outliers mistakenly linked into the trajectory.
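
Both post-processing steps can be sketched as follows; the window size is illustrative:

```python
def postprocess(traj, window=5):
    """Sec. 3.5: fill gaps with linear interpolation, then smooth locations.

    `traj` maps frame -> (x, y, w, h); `window` is an illustrative half-width."""
    frames = sorted(traj)
    for t0, t1 in zip(frames, frames[1:]):       # linear-velocity interpolation
        for t in range(t0 + 1, t1):
            a = (t - t0) / (t1 - t0)
            traj[t] = tuple((1 - a) * u + a * v
                            for u, v in zip(traj[t0], traj[t1]))
    frames = sorted(traj)
    smoothed = {}
    for t in frames:                             # sliding-window averaging
        nbrs = [traj[s] for s in frames if abs(s - t) <= window]
        smoothed[t] = tuple(sum(c) / len(nbrs) for c in zip(*nbrs))
    return smoothed
```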

4 Appearance model

A high-accuracy appearance model is vital to designing a robust tracker. An ideal appearance model should be able to determine whether two detections in two frames of a video belong to the same object (providing a pairwise affinity measure). If the images within two detections d_i and d_j are from the same object, the appearance model should give an affinity value close to 1; otherwise it should give a value close to 0. However, relatively little literature has focused on the design of appearance models.

In previous works most researchers used hand-crafted features as their appearance models. [1, 14, 23] used the HOG features introduced by Dalal and Triggs [8], and [23] also used color histograms. Choi [6] proposed the aggregated local flow descriptor (ALFD) as their appearance model, which originates from optical flow and interest point trajectories. Relatively few works, like [40], used deep learning methods to design matching functions. We propose a deep learning based appearance model which leads to better performance in more complex scenes. With sufficient training data, our model can automatically learn a generic function that produces a similarity score from two detections' appearance, robust to partial occlusion, illumination change, viewing angle difference, scale variation and other appearance distortions.

Now we introduce the implementation of our appearance model. The details of the model are similar to [27]. To be concrete, the image patch for feature extraction is first resized to a fixed input size, then used as the input of our network. In the first stage, we use a ResNet [15] structure as our dCNN component and train it on an object classification task. In the second stage, the feature maps extracted by the dCNN component are passed through an LSTM-based RNN component to extract the fixed-dimension feature f_d of detection d. The appearance affinity score of two detections d_i and d_j is calculated from the L2 distance of their features, as mentioned in Sec. 3.1.

For training, we use publicly available datasets: Market1501 [52] for the first classification stage and CUHK03 [26] for the second stage. We use the triplet loss [38] as our loss function in the second stage, which ensures that detections of the same object have a high affinity score and detections from different objects have a low affinity score.
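
For reference, a sketch of the triplet loss on feature vectors; the margin value is illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss [38]: the anchor must be closer to the positive (same
    identity) than to the negative (different identity) by at least `margin`
    (an illustrative value)."""
    d_pos = np.linalg.norm(anchor - positive)   # distance to same object
    d_neg = np.linalg.norm(anchor - negative)   # distance to different object
    return max(0.0, d_pos - d_neg + margin)
```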

5 Experiments

We tested our tracking algorithm on the MOT16 benchmark [29]. It is a collection of existing and new data (part of the sources are from [24] and [11]), containing challenging real-world videos of both static and moving scenes, split into seven sequences for training and seven for testing. It is a large-scale dataset, with hundreds of thousands of annotated bounding boxes across the training and test sets. All video sequences are annotated under strict standards and their ground truths are highly accurate, making the evaluation meaningful. For evaluation, the CLEAR MOT metrics [2] are used. Note that the MOT16 benchmark carefully annotates some "ignore" classes; detecting or not detecting them does not affect the evaluation result.

5.1 Detections used

For public detection, we use the DPM v5 detections provided by Felzenszwalb et al. [13]. The MOT16 Challenge officially provides public detection results with detection scores above a fixed threshold, making comparison between public trackers fair. We perform NMS-IOM on the detections, where the intersection over minimum (IOM) of two detections d_i and d_j is defined, following [10], as

\mathrm{IOM}(d_i, d_j) = \frac{\mathrm{area}(b_{d_i} \cap b_{d_j})}{\min(\mathrm{area}(b_{d_i}), \mathrm{area}(b_{d_j}))}    (12)

As Tang et al. [39] pointed out, on public detections performing NMS with IOM is more useful than with IOU due to the properties of DPM v5 [13]. We use a fixed IOM threshold for NMS-IOM in practice.
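
A sketch of greedy NMS under the IOM criterion; the threshold stands in for the value elided above:

```python
def iom(a, b):
    """Eq. (12): intersection over minimum of two (x, y, w, h) boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ix * iy / min(a[2] * a[3], b[2] * b[3])

def nms_iom(detections, threshold=0.5):
    """Greedy NMS keeping the highest-scoring boxes first.

    `detections` are (score, box) pairs; `threshold` is a placeholder."""
    kept = []
    for score, box in sorted(detections, key=lambda p: -p[0]):
        if all(iom(box, k) < threshold for _, k in kept):
            kept.append((score, box))
    return kept
```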

For private detection, Yu et al. [49] introduced a high-performance detector based on Faster R-CNN and made their private detection results publicly available. We use their private detections with detection scores above a fixed threshold, with a different threshold for the video MOT16-04; the NMS-IOU threshold also follows [49]. These settings are the same as in [49]. Note that the private detector performs significantly better than the public detector.

[49] provides a table of detection performance evaluation for both the public and private detectors.

5.2 Appearance model analysis

Method MOTA↑ MOTP↑ FAF↓ MT↑ ML↓ FP↓ FN↓ ID Sw.↓ Frag↓ Hz↑
NOMT [6] 46.4 76.6 1.6 18.3% 41.4% 9753 87565 359 504 2.6
JMC [39] 46.3 75.7 1.1 15.5% 39.7% 6373 90914 657 1114 0.8
SOT+MOT 44.7 75.2 2.1 18.6% 46.5% 12491 87855 404 709 0.8
oICF [20] 43.2 74.3 1.1 11.3% 48.5% 6651 96515 381 1404 0.4
MHT_DAM [21] 42.9 76.6 1.0 13.6% 46.9% 5668 97919 499 659 0.8
LINF1 [12] 41.0 74.8 1.3 11.6% 51.3% 7896 99224 430 963 1.1
EAMTT_pub [37] 38.8 75.1 1.4 7.9% 49.1% 8114 102452 965 1657 11.8
Table 1: Public tracking results on the MOT16 benchmark test set; ↑ means higher is better and ↓ means lower is better. The meaning of each evaluation measure is given in [29]. Trackers are sorted by MOTA. We stress the best result under each evaluation measure.
Method MOTA↑ MOTP↑ FAF↓ MT↑ ML↓ FP↓ FN↓ ID Sw.↓ Frag↓ Hz↑
SOT+MOT 68.6 78.8 2.1 43.9% 19.9% 12690 43873 737 869 0.6
KDNT [49] 68.2 79.4 1.9 41.0% 19.0% 11479 45605 933 1093 0.7
POI [49] 66.1 79.5 0.9 34.0% 20.8% 5061 55914 805 3093 9.9
MCMOT_HDM [25] 62.4 78.3 1.7 31.5% 24.2% 9855 57257 1394 1318 34.9
NOMTwSDP16 [6] 62.2 79.6 0.9 32.5% 31.1% 5119 63352 406 642 3.1
EAMTT [37] 52.5 78.8 0.7 19.0% 34.9% 4407 81223 910 1321 12.2
Table 2: Private tracking results on the MOT16 benchmark test set; ↑ means higher is better and ↓ means lower is better.
Method MOTA↑ MOTP↑ FP↓ FN↓ ID Sw.↓ detector
SOT+MOT(priv) 67.3 81.2 5338 30266 529 private
MOT(priv) 67.0 81.3 5259 30626 524 private
SOT+MOT 41.9 77.4 4235 59757 198 public
SOT(template matching)+MOT 41.7 77.8 4201 59954 174 public
MOT 38.1 78.1 5014 63121 249 public
Table 3: Comparison of our tracking results on the MOT16 benchmark training set with different settings; ↑ means higher is better and ↓ means lower is better.

We run an analysis of our appearance model on the MOT16 training dataset, which has no overlap with the appearance model's training data. We randomly sample positive and negative pairs of ground truth detections from the video MOT16-04. The frame distance between the two detections in a pair is unrestricted, i.e. each pair is sampled uniformly at random over the whole video. Under the threshold used in our tracker, our appearance model achieves high accuracy on both positive and negative pairs.

5.3 MOT16 Challenge evaluation

Table 1 provides a comparison between our algorithm and other state-of-the-art methods using public detections on the MOT16 benchmark test set, and Table 2 provides a comparison using private detections. Our tracker outperforms other state-of-the-art algorithms on MOTA using private detections, and also performs comparably with other state-of-the-art algorithms using public detections. Our method achieves the lowest or nearly the lowest FN with both kinds of detections, which demonstrates the ability of our SOT algorithm to reduce the number of FNs. Our robust appearance model leads to a low number of ID switches.

Table 3 provides a comparison of our trackers with different settings on the MOT16 benchmark training set, where the "training set" actually plays the role of a validation set. "SOT+MOT" is the algorithm we present in this paper, and "MOT" is our tracking framework without the SOT step. From the table we observe that our SOT algorithm improves our tracker's performance regardless of the quality of detections. The improvement is more significant on public detections, because the public detector fails to detect objects more frequently than the private detector, sometimes even in simple scenes, resulting in more FNs that our SOT algorithm can recover. We also implemented a naive template matching algorithm as the appearance model used in SOT, which is faster than the neural network based model and easy to implement. From the table we can see that even with a simple appearance model, performing the SOT step is still greatly helpful to the whole tracking algorithm.

Figure 3: Qualitative examples of our tracking results. The color of bounding boxes and the numbers represent the identity of objects. The first row is our public SOT+MOT results, the second row is our public results without SOT, the third row is our private SOT+MOT results. Our SOT algorithm can help to reduce the number of FNs. The figure is best shown in color.

Fig. 3 shows some qualitative examples of our tracking results.

6 Conclusion

In this paper, we propose a novel framework that combines an SOT algorithm with an MOT algorithm, which can significantly reduce the number of FNs while maintaining a low number of FPs. As another contribution, we also propose a deep learning based appearance model that shows strong ability both in discovering missing detections in the SOT part and in providing a strong affinity measure in the MOT part. Experiments show that our tracking algorithm runs relatively fast with state-of-the-art performance, which enables its further usage in various applications, such as autonomous driving and surveillance.

As for future work, we notice that our SOT algorithm produces useful identity information for detections, i.e. it can link detections to tracklets, but we did not make good use of this information. Another possible improvement is that the appearance model we use was originally trained for our MOT algorithm; we directly used it for candidate matching in our SOT algorithm without altering the way the model is trained, which may limit the power of our SOT algorithm.

References

  • [1] A. Andriyenko, K. Schindler, and S. Roth. Discrete-continuous optimization for multi-target tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1926–1933. IEEE, 2012.
  • [2] K. Bernardin and R. Stiefelhagen. Evaluating multiple object tracking performance: the clear mot metrics. EURASIP Journal on Image and Video Processing, 2008(1):1–10, 2008.
  • [3] L. Bertinetto, J. Valmadre, S. Golodetz, O. Miksik, and P. Torr. Staple: Complementary learners for real-time tracking. In International Conference on Computer Vision and Pattern Recognition, 2016.
  • [4] K. Briechle and U. D. Hanebeck. Template matching using fast normalized cross correlation. In Aerospace/Defense Sensing, Simulation, and Controls, pages 95–102. International Society for Optics and Photonics, 2001.
  • [5] A. A. Butt and R. T. Collins. Multi-target tracking by lagrangian relaxation to min-cost network flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1846–1853, 2013.
  • [6] W. Choi. Near-online multi-target tracking with aggregated local flow descriptor. In Proceedings of the IEEE International Conference on Computer Vision, pages 3029–3037, 2015.
  • [7] W. Choi, C. Pantofaru, and S. Savarese. A general framework for tracking multiple people from a moving camera. IEEE transactions on pattern analysis and machine intelligence, 35(7):1577–1591, 2013.
  • [8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 1, pages 886–893. IEEE, 2005.
  • [9] A. Dehghan, S. Modiri Assari, and M. Shah. Gmmcp tracker: Globally optimal generalized maximum multi clique problem for multiple object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4091–4099, 2015.
  • [10] P. Dollár, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In British Machine Vision Conference, BMVC 2009, London, UK, September 7-10, 2009. Proceedings, 2009.
  • [11] A. Ess, B. Leibe, and L. Van Gool. Depth and appearance for mobile scene analysis. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
  • [12] L. Fagot-Bouquet, R. Audigier, Y. Dhome, and F. Lerasle. Improving multi-frame data association with sparse representations for robust near-online multi-object tracking. In European Conference on Computer Vision, pages 774–790. Springer, 2016.
  • [13] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence, 32(9):1627–1645, 2010.
  • [14] M. Godec, P. M. Roth, and H. Bischof. Hough-based tracking of non-rigid objects. Computer Vision and Image Understanding, 117(10):1245–1256, 2013.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Computer Science, 2015.
  • [16] J. Hong Yoon, C.-R. Lee, M.-H. Yang, and K.-J. Yoon. Online multi-object tracking via structural constraint event aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1392–1400, 2016.
  • [17] C. Huang, B. Wu, and R. Nevatia. Robust object tracking by hierarchical association of detection responses. In European Conference on Computer Vision, pages 788–801. Springer, 2008.
  • [18] M. Keuper, S. Tang, Y. Zhongjie, B. Andres, T. Brox, and B. Schiele. A multi-cut formulation for joint segmentation and tracking of multiple objects. arXiv preprint arXiv:1607.06317, 2016.
  • [19] Z. Khan, T. Balch, and F. Dellaert. Mcmc-based particle filtering for tracking a variable number of interacting targets. IEEE transactions on pattern analysis and machine intelligence, 27(11):1805–1819, 2005.
  • [20] H. Kieritz, S. Becker, W. Hubner, and M. Arens. Online multi-person tracking using integral channel features. In 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 122–130, 2016.
  • [21] C. Kim, F. Li, A. Ciptadi, and J. M. Rehg. Multiple hypothesis tracking revisited. In Proceedings of the IEEE International Conference on Computer Vision, pages 4696–4704, 2015.
  • [22] H. W. Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83–97, 1955.
  • [23] C.-H. Kuo, C. Huang, and R. Nevatia. Multi-target tracking by on-line learned discriminative appearance models. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 685–692. IEEE, 2010.
  • [24] L. Leal-Taixé, A. Milan, I. Reid, S. Roth, and K. Schindler. MOTChallenge 2015: Towards a benchmark for multi-target tracking. arXiv:1504.01942 [cs], Apr. 2015. arXiv: 1504.01942.
  • [25] B. Lee, E. Erdenee, S. Jin, and P. K. Rhee. Multi-class multi-object tracking using changing point detection. arXiv preprint arXiv:1608.08434, 2016.
  • [26] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 152–159, 2014.
  • [27] H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan. End-to-end comparative attention networks for person re-identification. arXiv preprint arXiv:1606.04404, 2016.
  • [28] S. Liu, T. Zhang, X. Cao, and C. Xu. Structural correlation filter for robust visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4312–4320, 2016.
  • [29] A. Milan, L. Leal-Taixé, I. Reid, S. Roth, and K. Schindler. MOT16: A benchmark for multi-object tracking. arXiv:1603.00831 [cs], Mar. 2016. arXiv: 1603.00831.
  • [30] A. Milan, S. H. Rezatofighi, A. Dick, K. Schindler, and I. Reid. Online multi-target tracking using recurrent neural networks. arXiv preprint arXiv:1604.03635, 2016.
  • [31] A. Milan, S. Roth, and K. Schindler. Continuous energy minimization for multitarget tracking. IEEE transactions on pattern analysis and machine intelligence, 36(1):58–72, 2014.
  • [32] H. Nam and B. Han. Learning multi-domain convolutional neural networks for visual tracking. Computer Science, 2015.
  • [33] H. Pirsiavash, D. Ramanan, and C. C. Fowlkes. Globally-optimal greedy algorithms for tracking a variable number of objects. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1201–1208. IEEE, 2011.
  • [34] J. Prokaj, M. Duchaineau, and G. Medioni. Inferring tracklets for multi-object tracking. In CVPR 2011 WORKSHOPS, pages 37–44. IEEE, 2011.
  • [35] Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, and J. L. M.-H. Yang. Hedged deep tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [36] E. Ristani and C. Tomasi. Tracking multiple people online and in real time. In Asian Conference on Computer Vision, pages 444–459. Springer, 2014.
  • [37] R. Sanchez-Matilla, F. Poiesi, and A. Cavallaro. Online multi-target tracking with strong and weak detections. In European Conference on Computer Vision, pages 84–99. Springer, 2016.
  • [38] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
  • [39] S. Tang, B. Andres, M. Andriluka, and B. Schiele. Subgraph decomposition for multi-target tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5033–5041, 2015.
  • [40] S. Tang, B. Andres, M. Andriluka, and B. Schiele. Multi-person tracking by multicut and deep matching. In European Conference on Computer Vision, pages 100–111. Springer, 2016.
  • [41] R. Tao, E. Gavves, and A. W. Smeulders. Siamese instance search for tracking. arXiv preprint arXiv:1605.05863, 2016.
  • [42] B. Wang, G. Wang, K. L. Chan, and L. Wang. Tracklet association by online target-specific metric learning and coherent dynamics estimation. 2016.
  • [43] B. Wang, G. Wang, K. Luk Chan, and L. Wang. Tracklet association with online target-specific metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1234–1241, 2014.
  • [44] L. Wang, W. Ouyang, X. Wang, and H. Lu. Visual tracking with fully convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 3119–3127, 2015.
  • [45] L. Wang, W. Ouyang, X. Wang, and H. Lu. Stct: Sequentially training convolutional networks for visual tracking. CVPR, 2016.
  • [46] L. Wen, W. Li, J. Yan, Z. Lei, D. Yi, and S. Z. Li. Multiple target tracking based on undirected hierarchical relation hypergraph. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 1282–1289. IEEE, 2014.
  • [47] B. Yang, C. Huang, and R. Nevatia. Learning affinities and dependencies for multi-target tracking using a crf model. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1233–1240. IEEE, 2011.
  • [48] B. Yang and R. Nevatia. An online learned crf model for multi-target tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2034–2041. IEEE, 2012.
  • [49] F. Yu, W. Li, Q. Li, Y. Liu, X. Shi, and J. Yan. Poi: Multiple object tracking with high performance detection and appearance feature. arXiv preprint arXiv:1610.06136, 2016.
  • [50] A. R. Zamir, A. Dehghan, and M. Shah. Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs. In Computer Vision–ECCV 2012, pages 343–356. Springer, 2012.
  • [51] L. Zhang, Y. Li, and R. Nevatia. Global data association for multi-object tracking using network flows. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
  • [52] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pages 1116–1124, 2015.
  • [53] G. Zhu, F. Porikli, and H. Li. Beyond local search: Tracking objects everywhere with instance-specific proposals. arXiv preprint arXiv:1605.01839, 2016.