POI: Multiple Object Tracking with High Performance Detection and Appearance Feature

10/19/2016 · by Fengwei Yu, et al.

Detection and learning-based appearance features play a central role in data association based multiple object tracking (MOT), yet most recent MOT works overlook them and focus only on hand-crafted features and association algorithms. In this paper, we explore high-performance detection and a deep learning based appearance feature, and show that they lead to significantly better MOT results in both online and offline settings. We make our detection results and appearance features publicly available. In the following, we first summarize the detection and appearance feature, and then introduce our tracker, named Person of Interest (POI), which has both an online and an offline version.


1 Detection

In data association based MOT, the tracking performance is heavily affected by the detection results. We implement our detector based on Faster R-CNN [14]. In our implementation, the CNN model is fine-tuned from VGG-16 pre-trained on ImageNet. The additional training data includes the ETHZ pedestrian dataset [4], the Caltech pedestrian dataset [2], and a self-collected surveillance dataset (365,653 boxes in 47,556 frames). We adopt a multi-scale training strategy by randomly sampling a pyramid scale for each image, but use only a single scale and a single model at test time. Moreover, we use skip pooling [1] and multi-region [5] strategies to combine features at different scales and levels.
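As a rough illustration of the multi-scale training strategy, the sketch below samples one pyramid scale per training image. The scale values and helper name are hypothetical; the paper does not list the exact pyramid.

```python
import random

# Hypothetical pyramid of shorter-side scales (pixels); the exact values
# used for training are an assumption, not taken from the paper.
PYRAMID_SCALES = [480, 576, 688, 800, 960]
TEST_SCALE = 600  # a single scale and a single model are used at test time

def training_scale():
    # Multi-scale training: resize each training image so that its
    # shorter side matches a randomly sampled pyramid scale.
    return random.choice(PYRAMID_SCALES)
```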

Considering the definition of MOTA in MOT16 [12], the sum of false negatives (FN) and false positives (FP) has a large impact on the MOTA value. Table 1 shows that our detection optimization strategies lead to a significant decrease in the sum of FP and FN (we use a detection score threshold of 0.3 for Faster R-CNN and -1 for DPMv5, label each detection box with an incremental integer ID, and evaluate FP and FN with the MOT16 devkit); a minimal sketch of the MOTA computation follows Table 1.

Strategies FP FN FP+FN
DPMv5 28839 62353 91192
Faster R-CNN baseline 5384 47343 52727
Faster R-CNN + skip pooling 5410 46399 51809
Faster R-CNN + multi-region 4476 46738 51214
Faster R-CNN + both 8722 37865 46587
Table 1: Detection Performance Evaluation (on the MOT16 train set)
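For reference, MOTA in MOT16 [12] is 1 − (FP + FN + IDSW) / GT, accumulated over all frames, so lowering FP + FN translates directly into higher MOTA. A minimal sketch:

```python
def mota(fp, fn, idsw, num_gt):
    """MOTA as defined for MOT16: 1 - (FP + FN + IDSW) / GT, where GT is
    the total number of ground-truth boxes over all frames. FP + FN
    dominates the numerator, which is why Table 1 tracks their sum."""
    return 1.0 - (fp + fn + idsw) / float(num_gt)
```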

2 Appearance Feature

The distance between appearance features is used for computing the affinity value in data association. With an ideal appearance feature, the affinity value should be large for persons of the same identity and small for persons of different identities. In our implementation, we extract the appearance feature using a network similar to GoogLeNet [15]. The input size of our network is 96×96, and the kernel size of the pool5 layer is 3×3 instead of 7×7. The output layer is a fully connected layer that produces a 128-dimensional feature. In the tracking phase, patches are first cropped according to the detection responses and then resized to 96×96 for feature extraction. The cosine distance is used for measuring the appearance affinity.

For training, we collect a dataset containing nearly 119K patches from 19,835 identities. The dataset combines multiple person re-id datasets, including PRW [18], Market-1501 [18], VIPeR [13] and CUHK03 [8]. We use the softmax and triplet losses jointly during training: the softmax loss guarantees the discriminative ability of the appearance feature, while the triplet loss keeps the cosine distance between appearance features of the same identity small.
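A minimal sketch of such a joint objective, assuming PyTorch and a batch laid out as (anchor, positive, negative) triplets; the margin and loss weight are illustrative assumptions, as the paper does not report them.

```python
import torch
import torch.nn.functional as F

def joint_loss(features, logits, labels, margin=0.2, weight=1.0):
    """Sketch of a joint softmax + cosine-triplet objective.
    features: (B, 128) embeddings; logits: (B, num_identities) classifier
    outputs; labels: (B,) identity labels. The batch is assumed to hold
    triplets at indices (3i, 3i+1, 3i+2). Margin/weight are illustrative."""
    # Softmax (cross-entropy) term: discriminative identities.
    softmax_loss = F.cross_entropy(logits, labels)

    # Triplet term on cosine distance: 1 - dot of L2-normalized features.
    f = F.normalize(features, dim=1)
    anchor, positive, negative = f[0::3], f[1::3], f[2::3]
    d_pos = 1.0 - (anchor * positive).sum(dim=1)
    d_neg = 1.0 - (anchor * negative).sum(dim=1)
    triplet_loss = F.relu(d_pos - d_neg + margin).mean()

    return softmax_loss + weight * triplet_loss
```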

3 Online Tracker

We implement a simple online tracker, which uses a Kalman filter [6] for motion prediction and the Kuhn-Munkres algorithm [7] for data association. The overall tracking procedure is described in Algorithm 1.

Input: A new frame at the t-th timestep, the detection set D_t, and the tracklet set T_{t-1}
Output: The new tracklet set T_t
1: Calculate the affinity matrix A between T_{t-1} and D_t
2: Divide T_{t-1} into the high tracking quality set T_high and the low quality set T_low with threshold τ_q
3: Use the Kuhn-Munkres algorithm to find the optimal matching between T_{t-1} and D_t based on A
4: Use threshold τ_a to decide whether each association succeeds or fails
5: Obtain the association-success set with its matched detection set (T_s, D_s), the association-fail tracklet set T_f, and the unmatched detection set D_u
6: Use the Kalman filter and feature aggregation to generate a new tracklet subset from the association-success set: (T_s, D_s) → T_s'
7: Use the Kalman filter to predict or remove the association-fail tracklets with the missing-tracklet threshold τ_m: T_f → T_f'
8: Initialize the unmatched detections as new tracklets: D_u → T_new
9: Merge the tracklet subsets to generate the candidate tracklet set: T_t^c = T_s' ∪ T_f' ∪ T_new
10: Remove candidate tracklets that leave the image border to generate the new tracklet set T_t
Algorithm 1 Overall Procedure of the Online Tracker

In the following, we introduce the affinity matrix construction, data association method, threshold value setting and tracking quality metric.

Affinity Matrix Construction. To construct an affinity matrix for the Kuhn-Munkres algorithm, we calculate the affinity between tracklets and detections. We combine motion, shape and appearance affinity as the final affinity. Specifically, the appearance affinity is calculated based on the appearance feature described in Section 2. Details of the affinity calculation are given below:

aff_app(trk, det) = cosine(feat_trk, feat_det)    (1)
aff_mot(trk, det) = exp(-w_1 · (((X_trk - X_det) / W_det)² + ((Y_trk - Y_det) / H_det)²))    (2)
aff_shp(trk, det) = exp(-w_2 · (|H_trk - H_det| / (H_trk + H_det) + |W_trk - W_det| / (W_trk + W_det)))    (3)
aff(trk, det) = aff_app(trk, det) · aff_mot(trk, det) · aff_shp(trk, det)    (4)

Here aff_app, aff_mot and aff_shp indicate the appearance, motion and shape affinity between the detection and the tracklet, respectively. We combine these affinities, weighted by w_1 and w_2, into the final affinity.
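A direct NumPy transcription of Eqs. (1)-(4) as reconstructed above; the box layout (center X, Y, width W, height H) is an assumption.

```python
import numpy as np

def affinity(feat_t, feat_d, box_t, box_d, w1=0.5, w2=1.5):
    """Combined affinity of Eqs. (1)-(4). Boxes are (X, Y, W, H) with
    (X, Y) the center; w1, w2 follow the parameter note in Section 5."""
    # (1) appearance: cosine similarity of the 128-d features
    aff_app = np.dot(feat_t, feat_d) / (np.linalg.norm(feat_t) * np.linalg.norm(feat_d))
    Xt, Yt, Wt, Ht = box_t
    Xd, Yd, Wd, Hd = box_d
    # (2) motion: penalize center displacement relative to detection size
    aff_mot = np.exp(-w1 * (((Xt - Xd) / Wd) ** 2 + ((Yt - Yd) / Hd) ** 2))
    # (3) shape: penalize relative width/height differences
    aff_shp = np.exp(-w2 * (abs(Ht - Hd) / (Ht + Hd) + abs(Wt - Wd) / (Wt + Wd)))
    # (4) final affinity: product of the three terms
    return aff_app * aff_mot * aff_shp
```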

Data Association. The tracklets and new detections are associated using the Kuhn-Munkres algorithm. Since the Kuhn-Munkres algorithm seeks a globally optimal matching, it may fail when some detections are missing. To handle this, we use a two-stage matching strategy, which divides T_{t-1} into a high tracking quality set T_high and a low quality set T_low. Matching is first performed between T_high and D_t, and then between T_low and the remaining unmatched detections.
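A sketch of one matching stage using SciPy's Hungarian solver; negating the affinity matrix turns cost minimization into affinity maximization. Under the two-stage strategy, this would run first on T_high, then on T_low with the leftover detections.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_stage(affinity_matrix, tau_a=0.5):
    """One matching stage: maximize total affinity with the Kuhn-Munkres
    algorithm and keep only pairs whose affinity clears tau_a.
    Returns (matched pairs, unmatched tracklet rows, unmatched detection cols)."""
    rows, cols = linear_sum_assignment(-affinity_matrix)  # maximize affinity
    pairs = [(r, c) for r, c in zip(rows, cols) if affinity_matrix[r, c] >= tau_a]
    matched_r = {r for r, _ in pairs}
    matched_c = {c for _, c in pairs}
    unmatched_r = [r for r in range(affinity_matrix.shape[0]) if r not in matched_r]
    unmatched_c = [c for c in range(affinity_matrix.shape[1]) if c not in matched_c]
    return pairs, unmatched_r, unmatched_c
```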

Threshold Value Setting. On line 2 of Algorithm 1, we introduce τ_q to divide T_{t-1} into the high and low tracking quality sets. The strategy is intuitive: a tracklet whose tracking quality is higher than τ_q is marked as high, and all other tracklets are marked as low. On line 4, we use τ_a to mark an association as success or fail based on the affinity value. On line 7, we use τ_m as a threshold to drop a tracklet that has been lost for more than τ_m frames.

Tracking Quality Metric. The tracking quality is designed to measure whether an object is being tracked well. We use the following formula to define it:

Q(trk) = (Σ_{(trk, det) ∈ Ω} aff(trk, det) / |Ω|) × (1 − exp(−w_3 · √|Ω|))    (5)

where Ω, whose elements have the form (trk, det), is the set that contains every successful association couple in the tracklet's history.
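Eq. (5) transcribed directly, under the reconstruction above; history_affinities holds the affinity of every successful association of the tracklet, so short tracks are damped rather than over-trusted.

```python
import math

def tracking_quality(history_affinities, w3=1.2):
    """Eq. (5): mean affinity over all successful associations, scaled by
    a term that grows with track length so brief tracks score lower."""
    n = len(history_affinities)
    if n == 0:
        return 0.0
    return (sum(history_affinities) / n) * (1.0 - math.exp(-w3 * math.sqrt(n)))
```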

4 Offline Tracker

Our offline tracker is an improved version of H2T [16], built on K-Dense Neighbors [11]. It is more robust and efficient than H2T in handling complex tracking scenarios. The overall procedure of the tracker is described in Algorithm 2.

Input: A tracking video and the detections in all frames
Output: The tracking results (trajectories of targets)
1: Divide the tracking video into multiple disjoint segments in the temporal domain
2: Use the dense neighbors (DN) search to associate the detection responses into short tracklets in each segment
3: while the number of segments is greater than one do
4:       Merge several nearby segments into a longer segment
5:       Use the DN search in each longer segment to associate existing tracklets into longer tracklets
6: end while
Algorithm 2 Overall Procedure of the Offline Tracker

Note: the DN search is performed on an affinity matrix which encodes the similarity between two tracklets; please refer to [3, 10, 11, 16] for details about the DN search and its advantages over GMCP [9, 17] as a data association method.

We make the following improvements over H2T [16].

Appearance Representation. To construct the affinity matrix for the dense neighbors (DN) search, we need to calculate three affinities, i.e., the appearance, motion, and smoothness affinity. Among these, the appearance affinity is the most important one, and we use the CNN-based feature described in Section 2 instead of the hand-crafted feature in [16].

Big Target. A scenario that H2T [16] does not handle well is the mixture of small and big targets. The reason is that the motion and smoothness affinities are unreliable for big targets; this unreliability is caused by the unsteady detection responses of big targets. We introduce two thresholds, τ_1 and τ_2, regarding the object scale to deal with this challenge: τ_1 prevents associating detection responses of very different scales, and τ_2 determines whether to reduce the weights of the motion and smoothness affinities. Specifically, if the ratio of the detection response scale to the target scale is less than τ_1, the detection response will not be associated with the target. If the ratio of the detection response height to the image height is greater than τ_2, the weights of the motion and smoothness affinities are reduced. Both τ_1 and τ_2 are set to 0.5.
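A sketch of the two gates under stated assumptions: "scale" is taken as box area and the ratio is symmetrized via min/max, neither of which the text pins down.

```python
def scale_gates(det_wh, target_wh, image_height, tau1=0.5, tau2=0.5):
    """Big-target handling sketch. det_wh / target_wh are (W, H) boxes.
    Returns (may_associate, damp_motion_and_smoothness).
    Assumptions: 'scale' is interpreted as box area, and the ratio is
    taken min/max so it is symmetric in detection and target."""
    det_area = det_wh[0] * det_wh[1]
    target_area = target_wh[0] * target_wh[1]
    ratio = min(det_area, target_area) / max(det_area, target_area)
    may_associate = ratio >= tau1                  # reject very different scales
    damp = det_wh[1] / image_height > tau2         # big target: damp motion/smoothness
    return may_associate, damp
```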

Algorithm Efficiency. H2T is slow on long tracking sequences with many targets. Among the steps of the algorithm, the DN search is the most time-consuming: the larger the affinity matrix, the longer the DN search takes. We therefore abandon the high-order information [16] when constructing the affinity matrix, which significantly reduces the matrix dimensions and improves the algorithm efficiency.

5 Evaluation

Our online and offline trackers are not learning-based algorithms. We only tune the detection score threshold on the train set and apply it to similar scenes in the test set. For evaluation and submission, the threshold is set to 0.1 for MOT16-03 and MOT16-04 because of the high precision of the detection results (both are surveillance scenes, which are relatively easy since our detector was trained with a self-collected surveillance dataset), and to 0.3 for the other sequences.

For both the online tracker (we use 0.5 for w_1, 1.5 for w_2, 1.2 for w_3, 0.5 for τ_a, 0.4 for τ_q, and 100 frames for τ_m) and the offline tracker, we compare our detector with the official detector, and our feature with the default GoogLeNet feature. The comparison results on the MOT16 [12] train set are listed in Table 2 and Table 3, respectively. Note that our detector leads to much better MT, ML, FP and FN, and our feature helps reduce both IDS and FM.

Det. and Feat. MT ML FP FN IDS FM MOTA MOTP
DPMv5 + Our Feat. 7.54% 52.42% 6197 70952 784 2697 29.4 77.2
Our Det. + GoogLeNet Feat. 31.72% 16.25% 3207 35472 1541 2235 63.6 82.6
Our Det. and Feat. 37.33% 14.70% 3497 34241 716 1973 65.2 82.4
Table 2: Online Tracker Results on the Train Set
Det. and Feat. MT ML FP FN IDS FM MOTA MOTP
DPMv5 + Our Feat. 10.64% 52.80% 27238 63443 1540 1853 16.5 77.4
Our Det. + GoogLeNet Feat. 13.93% 60.93% 1258 58213 1350 2196 44.9 85.0
Our Det. and Feat. 37.52% 17.60% 2762 33327 462 717 66.9 83.3
Table 3: Offline Tracker Results on the Train Set

6 ECCV 2016 Challenge Results

Tracker MT ML FP FN IDS FM MOTA MOTP
KFILDAwSDP (Online) 26.9% 21.6% 23266 56394 1977 2954 55.2 77.2
MCMOT-HDM (Offline) 31.5% 24.2% 9855 57257 1394 1318 62.4 78.3
Our Online Tracker 33.99% 20.82% 5061 55914 805 3093 66.1 79.5
Our Offline Tracker 40.97% 18.97% 11479 45605 933 1093 68.2 79.4
Table 4: Comparison with State-of-the-art Methods on the MOT16 Rank List

Our ECCV 2016 Challenge results are listed in Table 4. Both our online and offline trackers outperform the state-of-the-art approaches by a large margin. Note that our offline tracker achieves the best FN, while its FP is moderate due to the interpolation module.

7 Conclusion

In this submission, we put considerable effort into obtaining high-performance detection and a deep learning based appearance feature, and show that they lead to state-of-the-art multiple object tracking results, even with a very simple online tracker. One observation is that, given high-performance detection and appearance features, the state-of-the-art offline tracker does not have the expected advantage over the much simpler online one. This observation is not reported in many current MOT papers, which often use detections that are not good enough. We make our detections and deep learning based re-ID features on MOT16 publicly available, and hope that they help more sophisticated trackers achieve better performance.

References

  • [1] Bell, S., Zitnick, C.L., Bala, K., Girshick, R.B.: Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. CoRR (2015)

  • [2] Dollár, P., Wojek, C., Schiele, B., Perona, P.: Pedestrian detection: A benchmark. In: CVPR (2009)
  • [3] Du, D., Qi, H., Li, W., Wen, L., Huang, Q., Lyu, S.: Online deformable object tracking based on structure-aware hyper-graph. TIP (2016)
  • [4] Ess, A., Leibe, B., Schindler, K., Gool, L.J.V.: A mobile vision system for robust multi-person tracking. In: CVPR (2008)
  • [5] Gidaris, S., Komodakis, N.: Object detection via a multi-region and semantic segmentation-aware CNN model. In: ICCV (2015)
  • [6] Kalman, R.E.: A new approach to linear filtering and prediction problems. Journal of Basic Engineering (1960)
  • [7] Kuhn, H.W.: The hungarian method for the assignment problem. Naval research logistics quarterly (1955)
  • [8] Li, W., Zhao, R., Xiao, T., Wang, X.: Deepreid: Deep filter pairing neural network for person re-identification. In: CVPR (2014)
  • [9] Li, W., Wen, L., Chuah, M.C., Lyu, S.: Category-blind human action recognition: A practical recognition system. In: ICCV (2015)
  • [10] Li, W., Wen, L., Chuah, M.C., Zhang, Y., Lei, Z., Li, S.Z.: Online visual tracking using temporally coherent part cluster. In: WACV (2015)
  • [11] Liu, H., Yang, X., Latecki, L.J., Yan, S.: Dense neighborhoods on affinity graph. IJCV (2012)
  • [12] Milan, A., Leal-Taixé, L., Reid, I.D., Roth, S., Schindler, K.: MOT16: A benchmark for multi-object tracking. CoRR (2016)
  • [13] Prosser, B., Zheng, W., Gong, S., Xiang, T.: Person re-identification by support vector ranking. In: BMVC (2010)

  • [14] Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: NIPS (2015)
  • [15] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S.E., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
  • [16] Wen, L., Li, W., Yan, J., Lei, Z., Yi, D., Li, S.Z.: Multiple target tracking based on undirected hierarchical relation hypergraph. In: CVPR (2014)
  • [17] Zamir, A.R., Dehghan, A., Shah, M.: Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs. In: ECCV (2012)
  • [18] Zheng, L., Zhang, H., Sun, S., Chandraker, M., Tian, Q.: Person re-identification in the wild. CoRR (2016)