Towards Real-Time Multi-Object Tracking

09/27/2019 ∙ by Zhongdao Wang, et al.

Modern multiple object tracking (MOT) systems usually follow the tracking-by-detection paradigm, which comprises 1) a detection model for target localization and 2) an appearance embedding model for data association. Executing the two models separately can lead to efficiency problems, as the running time is simply the sum of the two steps, without exploiting structures that could be shared between them. Existing research efforts on real-time MOT usually focus on the association step, so they are essentially real-time association methods rather than real-time MOT systems. In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model. Specifically, we incorporate the appearance embedding model into a single-shot detector, such that the model can simultaneously output detections and the corresponding embeddings. As such, the system is formulated as a multi-task learning problem: there are multiple objectives, i.e., anchor classification, bounding box regression, and embedding learning, and the individual losses are automatically weighted. To our knowledge, this work reports the first (near) real-time MOT system, with a running speed of 18.8 to 24.1 FPS depending on the input resolution. Meanwhile, its tracking accuracy is comparable to that of state-of-the-art trackers embodying separate detection and embedding (SDE) learning (64.4% MOTA on the MOT-16 challenge). The code and models are available at https://github.com/Zhongdao/Towards-Realtime-MOT.


Introduction

Multiple object tracking (MOT), which aims at predicting the trajectories of multiple targets in video sequences, is of critical application significance in scenarios ranging from autonomous driving to smart video analysis.

The dominant strategy for this problem, i.e., the tracking-by-detection paradigm [mot16, poi, nomt], breaks MOT down into two steps: 1) the detection step, in which targets in single video frames are localized; and 2) the association step, in which detected targets are assigned and connected to existing trajectories. This means the system requires at least two compute-intensive components: a detector and an embedding (re-ID) model. We term these methods Separate Detection and Embedding (SDE) methods for convenience. The overall inference time is therefore roughly the sum of the two components, and it increases as the number of targets grows. This characteristic of SDE methods poses critical challenges for building a real-time MOT system, an essential demand in practice.

Figure 1: Comparison between (a) the Separate Detection and Embedding (SDE) model, (b) the two-stage model and (c) the proposed Joint Detection and Embedding (JDE).

In order to save computation, a feasible idea is to integrate the detector and the embedding model into a single network. The two tasks can then share the same set of low-level features, and re-computation is avoided. One choice for joint detection and embedding learning is to adopt the Faster R-CNN framework [faster], a type of two-stage detector. Specifically, the first stage, the region proposal network (RPN), remains the same as in Faster R-CNN and outputs detected bounding boxes; the second stage, Fast R-CNN [fast], can be converted into an embedding learning model by replacing the classification supervision with metric learning supervision [personsearch, MOTS]. Despite saving some computation, this approach is still limited in speed due to its two-stage design and usually runs at fewer than 10 frames per second (FPS), far from real-time requirements. Moreover, as in SDE methods, the runtime of the second stage also increases with the number of targets.

This paper is dedicated to improving the efficiency of an MOT system. We introduce an early attempt that Jointly learns the Detector and Embedding model (JDE) in a single-shot deep network. In other words, the proposed JDE employs a single network to simultaneously output detection results and the corresponding appearance embeddings of the detected boxes. In comparison, SDE methods and two-stage methods are characterized by re-sampled pixels (bounding boxes) and feature maps, respectively; the bounding boxes and feature maps are fed into a separate re-ID model for appearance feature extraction. Figure 1 briefly illustrates the difference between the SDE methods, the two-stage methods, and the proposed JDE. Our method is near real-time while being almost as accurate as the SDE methods. For example, we obtain a running time of 18.8 FPS with MOTA=64.4% on the MOT-16 test set. In comparison, Faster R-CNN + QAN embedding only runs at 6 FPS with MOTA=66.1% on the MOT-16 test set.

To build a joint learning framework with high efficiency and accuracy, we explore and deliberately design the following fundamental aspects: training data, network architecture, learning objectives, optimization strategy, and validation metrics. First, we collect six publicly available datasets on pedestrian detection and person search to form a unified large-scale multi-label dataset. In this unified dataset, all the pedestrian bounding boxes are labeled, and a portion of the pedestrian identities are labeled. Second, we choose the Feature Pyramid Network (FPN) [fpn] as our base architecture and discuss with which type of loss function the network learns the best embeddings. Then, we model the training process as a multi-task learning problem with anchor classification, box regression, and embedding learning. To balance the importance of each individual task, we employ task-dependent uncertainty [uncertainty] to dynamically weight the heterogeneous losses. Finally, we employ the following evaluation metrics. The average precision (AP) is used to evaluate the performance of the detector. The retrieval metric True Positive Rate (TPR) at a given False Alarm Rate (FAR) is adopted to evaluate the quality of the embedding. The overall MOT accuracy is evaluated by the CLEAR metrics [CLEAR], especially the MOTA metric. This paper also provides a range of new settings and baselines for the joint detection and embedding learning task, which we believe will facilitate research towards real-time MOT.

The contributions of our work are summarized as follows,

  • We introduce JDE, a single-shot framework for joint detection and embedding learning. As an online MOT system, it runs in (near) real-time and is comparable in accuracy to state-of-the-art Separate Detection and Embedding (SDE) methods.

  • We conduct thorough analysis and experiments on how to build such a joint learning framework from multiple aspects including training data, network architecture, learning objectives and optimization strategy.

  • Experiments with the same training data show the proposed JDE performs as well as a range of strong SDE model combinations and achieves the fastest speed.

  • Experiments on MOT-16 demonstrate the advantage of our method over state-of-the-art MOT systems in terms of the amount of training data, accuracy, and speed.

Related Work

Recent progress on multiple object tracking can be primarily categorized into the following aspects:

  1. Methods that model the association problem as some form of optimization problem on graphs [graph1, graph2, graph3].

  2. Methods that model the association process with an end-to-end neural network [e2e1, e2e2].

  3. Methods that seek novel tracking paradigms other than tracking-by-detection [tracktor].

Among them, the first two categories have been the prevailing solutions to MOT in the past decade. In these tracking-by-detection methods, detection results and appearance embeddings are given as input, and the only problem to be solved is data association. Although some methods claim to attain real-time speed, the runtime of the detector and of appearance feature extraction is excluded, so the overall system still falls short of that claim. In contrast, in this work we consider the runtime of the entire MOT system rather than the association step alone. Achieving efficiency for the entire system is of greater practical significance.

The third category attempts to explore novel MOT paradigms, for instance, incorporating single object trackers into the detector by predicting spatial offsets [tracktor]. These methods are appealing owing to their simplicity, but their tracking accuracy is not satisfactory unless an additional embedding model is introduced. As such, the trade-off between performance and speed still needs improvement.

Our approach is also related to the person search task, which aims to localize and recognize a query person in a large set of database frames. One solution to this task is likewise to learn the person detector and the embedding model jointly [personsearch]. Nevertheless, a major difference between MOT and person search systems is that MOT has more rigorous requirements on runtime, so person search approaches cannot be directly borrowed.

Another line of related work is the associative embedding used in human pose estimation [ae] and object detection [cornernet]. A low-dimensional dense vector map, named the associative embedding, is learned to group human joints or box corners. However, association is only applied within single images, while in MOT, association across different frames is needed, requiring the embedding to be more discriminative.

Figure 2: Illustration of (a) the network architecture and (b) the prediction head. Prediction heads are added upon multiple FPN scales. In each prediction head the learning of JDE is modeled as a multi-task learning problem. We automatically weight the heterogeneous losses by learning a set of auxiliary parameters, i.e., the task-dependent uncertainty.

Joint Learning of Detection and Embedding

Problem Settings

The objective of JDE is to simultaneously output the location and appearance embeddings of targets in a single forward pass. Formally, suppose we have a training dataset $\{\mathbf{I}, \mathbf{B}, \mathbf{y}\}$. Here, $\mathbf{I} \in \mathbb{R}^{c \times h \times w}$ indicates an image frame, and $\mathbf{B} \in \mathbb{R}^{k \times 4}$ represents the bounding box annotations for the $k$ targets in this frame. $\mathbf{y} \in \mathbb{Z}^{k}$ denotes the partially annotated identity labels, where $-1$ indicates targets without an identity label. JDE aims to output predicted bounding boxes $\hat{\mathbf{B}} \in \mathbb{R}^{\hat{k} \times 4}$ and appearance embeddings $\hat{\mathbf{F}} \in \mathbb{R}^{\hat{k} \times D}$, where $D$ is the dimension of the embedding. The following objectives should be satisfied.

  • $\hat{\mathbf{B}}$ is as close to $\mathbf{B}$ as possible.

  • Given a distance metric $d(\cdot, \cdot)$, for every pair of frames $t$ and $t + \Delta t$ and every target appearing in both, we have $d(f^{t}, f^{t+\Delta t}) < d(f^{t}, g^{t+\Delta t})$, where $f^{t}$ is a row vector from $\hat{\mathbf{F}}^{t}$, and $f^{t+\Delta t}$, $g^{t+\Delta t}$ are row vectors from $\hat{\mathbf{F}}^{t+\Delta t}$, i.e., embeddings of the same target and of a different target in frames $t$ and $t+\Delta t$, respectively.

The first objective requires the model to detect targets accurately. The second objective requires the appearance embedding to have the following property. The distance between observations of the same identity in consecutive frames should be smaller than the distance between different identities. The distance metric can be the Euclidean distance or the cosine distance. Technically, if the two objectives are both satisfied, even a simple association strategy, e.g., the Hungarian algorithm, would produce good tracking results.
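To make this concrete, below is a minimal sketch (in Python; not from the released code) of such a simple association strategy: detections are matched to tracklets with the Hungarian algorithm over a cosine-distance cost matrix. The function name and the 0.4 rejection threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(track_embs: np.ndarray, det_embs: np.ndarray, max_dist: float = 0.4):
    """Match detections to tracklets by cosine distance; reject weak matches."""
    cost = cdist(track_embs, det_embs, metric="cosine")  # (num_tracks, num_dets)
    rows, cols = linear_sum_assignment(cost)             # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Toy usage: 3 tracklets and 4 detections with 128-D embeddings.
rng = np.random.default_rng(0)
print(associate(rng.normal(size=(3, 128)), rng.normal(size=(4, 128))))
```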

Architecture Overview

We employ the architecture of Feature Pyramid Network (FPN) [fpn]. FPN makes predictions from multiple scales, thus bringing improvement in pedestrian detection, where the scale of targets varies a lot. Figure 2 briefly shows the neural architecture used in JDE. An input video frame first undergoes a forward pass through a backbone network to obtain feature maps at three scales, namely, scales with $1/32$, $1/16$ and $1/8$ down-sampling rates, respectively. Then, the feature map with the smallest size (also the semantically strongest features) is up-sampled and fused with the feature map from the second smallest scale by a skip connection, and the same goes for the other scales. Finally, prediction heads are added upon the fused feature maps at all three scales. A prediction head consists of several stacked convolutional layers and outputs a dense prediction map of size $(6A + D) \times H \times W$, where $A$ is the number of anchor templates assigned to this scale, and $D$ is the dimension of the embedding. The dense prediction map is divided into three parts (tasks):

  • the box classification results of size $2A \times H \times W$;

  • the box regression coefficients of size $4A \times H \times W$; and

  • the dense embedding map of size $D \times H \times W$.

In the following sections, we will detail how these tasks are trained.
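For concreteness, the following PyTorch fragment sketches this three-way split of the dense prediction map; the anchor count, embedding dimension, and map size are illustrative placeholders, not values prescribed here.

```python
import torch

A, D, H, W = 4, 512, 19, 34              # anchors per scale, emb. dim, map size (illustrative)
pred = torch.randn(1, 6 * A + D, H, W)   # dense prediction map of one head

cls_logits = pred[:, : 2 * A]            # box classification,  2A x H x W
box_deltas = pred[:, 2 * A : 6 * A]      # box regression,      4A x H x W
emb_map    = pred[:, 6 * A :]            # dense embedding map,  D x H x W
print(cls_logits.shape, box_deltas.shape, emb_map.shape)
```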

Learning to Detect

In general the detection branch is similar to the standard RPN [faster], but with two modifications. First, we redesign the anchors in terms of number, scale, and aspect ratio to adapt to the targets, i.e., pedestrians in our case. Based on the common prior, all anchors are set to an aspect ratio of 1:3. The number of anchor templates is set to 12, such that $A = 4$ for each scale, and the scales (widths) of the anchors range from 11 to 512. Second, we note that it is important to select proper values for the dual thresholds used for foreground/background assignment. By visualization we determine that an IOU > 0.5 w.r.t. the ground truth approximately ensures a foreground, which is consistent with the common setting in generic object detection. On the other hand, boxes with an IOU < 0.4 w.r.t. the ground truth should be regarded as background in our case, a stricter threshold than the lower value typically used in generic scenarios. Our preliminary experiments indicate that these thresholds effectively suppress false alarms, which usually happen under heavy occlusion.
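A minimal sketch of this dual-threshold foreground/background assignment is given below; the IOU routine is standard, the thresholds follow the text, and the helper names are assumptions of ours.

```python
import numpy as np

def iou(anchors: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Pairwise IOU between anchors (N, 4) and ground-truth boxes (M, 4), xyxy format."""
    tl = np.maximum(anchors[:, None, :2], gt[None, :, :2])   # top-left of intersection
    br = np.minimum(anchors[:, None, 2:], gt[None, :, 2:])   # bottom-right of intersection
    inter = np.prod(np.clip(br - tl, 0, None), axis=2)
    area_a = np.prod(anchors[:, 2:] - anchors[:, :2], axis=1)
    area_g = np.prod(gt[:, 2:] - gt[:, :2], axis=1)
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def assign(anchors: np.ndarray, gt: np.ndarray, fg_thr: float = 0.5, bg_thr: float = 0.4):
    """Label each anchor: 1 = foreground, 0 = background, -1 = ignored (in between)."""
    best = iou(anchors, gt).max(axis=1)   # best overlap of each anchor with any GT box
    labels = np.full(len(anchors), -1)
    labels[best > fg_thr] = 1
    labels[best < bg_thr] = 0
    return labels
```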

The learning objective of detection has two loss functions, namely the foreground/background classification loss $\mathcal{L}_{\alpha}$ and the bounding box regression loss $\mathcal{L}_{\beta}$. $\mathcal{L}_{\alpha}$ is formulated as a cross-entropy loss and $\mathcal{L}_{\beta}$ as a smooth-L1 loss. The regression targets are encoded in the same manner as in [faster].

Learning Appearance Embeddings

The second objective is a metric learning problem, i.e., learning an embedding space where instances of the same identity are close to each other while instances of different identities are far apart. To achieve this goal, an effective solution is to use the triplet loss [triplet], which has also been used in previous MOT works [MOTS]. Formally, we use the triplet loss $\mathcal{L}_{\text{triplet}} = \max\left(0, f^{\top} f^{-} - f^{\top} f^{+}\right)$, where $f$ is an instance in a mini-batch selected as an anchor, $f^{+}$ represents a positive sample w.r.t. $f$, and $f^{-}$ is a negative sample. The margin term is neglected for convenience. This naive formulation of the triplet loss has several challenges. The first is the huge sampling space in the training set. In this work we address this problem by looking at a mini-batch and mining all the negative samples and the hardest positive sample in this mini-batch, such that,

$$\mathcal{L}_{\text{triplet}} = \sum_{i} \max\left(0, f^{\top} f_{i}^{-} - f^{\top} f^{+}\right), \qquad (1)$$

where $f^{+}$ is the hardest positive sample in the mini-batch.
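As a minimal sketch, Eq. (1) can be computed over a mini-batch as follows; the batch-hard mining logic mirrors the text, while the function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet(emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Eq. (1): sum over all in-batch negatives against the hardest positive."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t()                          # pairwise inner products f^T f'
    loss = emb.new_zeros(())
    for i in range(len(labels)):
        pos = (labels == labels[i]) & (torch.arange(len(labels)) != i)
        neg = labels != labels[i]
        if pos.any() and neg.any():
            hardest_pos = sim[i][pos].min()      # hardest positive: lowest similarity
            loss = loss + torch.clamp(sim[i][neg] - hardest_pos, min=0).sum()
    return loss / len(labels)

# Toy usage: 6 embeddings belonging to 3 identities.
print(batch_hard_triplet(torch.randn(6, 32), torch.tensor([0, 0, 1, 1, 2, 2])))
```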

The second challenge is that training with the triplet loss can be unstable and convergence might be slow. To stabilize the training process and speed up convergence, it is proposed in [npair] to optimize over a smooth upper bound of the triplet loss,

$$\mathcal{L}_{\text{upper}} = \log\left(1 + \sum_{i} \exp\left(f^{\top} f_{i}^{-} - f^{\top} f^{+}\right)\right). \qquad (2)$$

Note that this smooth upper bound of the triplet loss can also be written as,

$$\mathcal{L}_{\text{upper}} = -\log \frac{\exp\left(f^{\top} f^{+}\right)}{\exp\left(f^{\top} f^{+}\right) + \sum_{i} \exp\left(f^{\top} f_{i}^{-}\right)}. \qquad (3)$$

It is similar to the formulation of the cross-entropy loss,

$$\mathcal{L}_{\text{CE}} = -\log \frac{\exp\left(f^{\top} g^{+}\right)}{\exp\left(f^{\top} g^{+}\right) + \sum_{i} \exp\left(f^{\top} g_{i}^{-}\right)}, \qquad (4)$$

where we denote the class-wise weight of the positive class (to which the anchor instance belongs) as $g^{+}$ and the weights of the negative classes as $g_{i}^{-}$. The major distinctions between $\mathcal{L}_{\text{upper}}$ and $\mathcal{L}_{\text{CE}}$ are two-fold. First, the cross-entropy loss employs learnable class-wise weights as proxies of class instances rather than using the embeddings of instances directly. Second, all the negative classes participate in the loss computation in $\mathcal{L}_{\text{CE}}$, such that the anchor instance is pulled away from all the negative classes in the embedding space. In contrast, in $\mathcal{L}_{\text{upper}}$, the anchor instance is only pulled away from the sampled negative instances.
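The structural similarity between Eq. (3) and Eq. (4) can be made explicit in a few lines: both are a negative log-softmax over one positive score and a set of negative scores, differing only in whether the negatives are sampled instance embeddings or learnable class weights. All shapes and the number of identities below are illustrative.

```python
import torch
import torch.nn.functional as F

D, num_negs, num_ids = 512, 10, 1000     # illustrative sizes
f  = torch.randn(D)                      # anchor embedding
fp = torch.randn(D)                      # sampled positive instance  (Eq. 3)
fn = torch.randn(num_negs, D)            # sampled negative instances (Eq. 3)
g  = torch.randn(num_ids, D)             # learnable class-weight proxies (Eq. 4)
y  = 42                                  # identity label of the anchor

# Eq. (3): negative log-softmax over [positive score, negative scores].
logits_upper = torch.cat([(f @ fp).view(1), fn @ f])
loss_upper = -F.log_softmax(logits_upper, dim=0)[0]

# Eq. (4): same form, but scores are taken against all class proxies.
loss_ce = F.cross_entropy((g @ f).unsqueeze(0), torch.tensor([y]))
print(loss_upper.item(), loss_ce.item())
```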

In light of the above analysis, we speculate that the performance of the three losses in our case should rank as $\mathcal{L}_{\text{CE}} > \mathcal{L}_{\text{upper}} > \mathcal{L}_{\text{triplet}}$. Results in the experiment section confirm this. As such, we select the cross-entropy loss as the objective for embedding learning (hereinafter referred to as $\mathcal{L}_{\gamma}$).

Specifically, if an anchor box is labeled as foreground, the corresponding embedding vector is extracted from the dense embedding map. Extracted embeddings are fed into a shared fully-connected layer that outputs the class-wise logits, and the cross-entropy loss is then applied to the logits. In this manner, embeddings from multiple scales share the same space, and association across scales is feasible. Embeddings with label $-1$, i.e., foregrounds with box annotations but without identity annotations, are ignored when computing the embedding loss.
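Putting this together, a minimal sketch of the embedding-loss path might look as follows; the number of identities, the foreground locations, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

D, H, W, num_ids = 512, 19, 34, 1000     # illustrative sizes
shared_fc = nn.Linear(D, num_ids)        # one classifier shared across all FPN scales

emb_map = torch.randn(D, H, W)           # dense embedding map of one prediction head
fg_y = torch.tensor([3, 7])              # foreground anchor locations (illustrative)
fg_x = torch.tensor([5, 20])
ids = torch.tensor([12, -1])             # -1: box annotated but identity unknown

emb = emb_map[:, fg_y, fg_x].t()         # gather embeddings -> (num_fg, D)
keep = ids >= 0                          # ignore identity-unlabeled foregrounds
loss = nn.functional.cross_entropy(shared_fc(emb[keep]), ids[keep])
print(loss.item())
```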

Automatic Loss Balancing

The learning objective of each prediction head in JDE can be modeled as a multi-task learning problem. The joint objective can be written as a weighted linear sum of losses from every scale and every component,

$$\mathcal{L}_{\text{total}} = \sum_{i=1}^{M} \sum_{j = \alpha, \beta, \gamma} w_{i}^{j} \mathcal{L}_{i}^{j}, \qquad (5)$$

where $M$ is the number of prediction heads and $w_{i}^{j}$ are the loss weights. A simple way to determine the loss weights is described below.

  1. Let $w_{i}^{\alpha} = w_{i}^{\beta}$ for all $i = 1, \dots, M$, as suggested in existing works on object detection [faster].

  2. Let $w_{i}^{j} = w_{k}^{j}$ for all $i, k$, i.e., share the loss weights across all prediction heads.

  3. Search for the remaining two independent loss weights for the best performance.

Searching loss weights with this strategy can yield decent results within several attempts. However, the reduced search space also imposes strong restrictions on the loss weights, such that the resulting weights might be far from optimal. Instead, we adopt the automatic learning scheme for loss weights proposed in [uncertainty], which uses the concept of task-dependent uncertainty. Formally, the learning objective with automatic loss balancing is written as,

$$\mathcal{L}_{\text{total}} = \sum_{i=1}^{M} \sum_{j = \alpha, \beta, \gamma} \frac{1}{2} \left( \frac{1}{e^{s_{i}^{j}}} \mathcal{L}_{i}^{j} + s_{i}^{j} \right), \qquad (6)$$

where $s_{i}^{j}$ models the task-dependent uncertainty of each individual loss and is implemented as a set of learnable parameters. We refer readers to [uncertainty] for a more detailed derivation and discussion.
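A minimal PyTorch sketch of Eq. (6) is given below, with one learnable parameter $s_i^j$ per task per prediction head; the module name and the zero initialization are our assumptions.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Eq. (6): one learnable log-variance s per task per prediction head."""
    def __init__(self, num_heads: int = 3, num_tasks: int = 3):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(num_heads, num_tasks))  # s_i^j, init 0

    def forward(self, losses: torch.Tensor) -> torch.Tensor:
        # losses: (num_heads, num_tasks) tensor of the individual L_i^j values
        return (0.5 * (torch.exp(-self.s) * losses + self.s)).sum()

weighting = UncertaintyWeighting()
total = weighting(torch.rand(3, 3))      # include weighting.parameters() in the optimizer
total.backward()
```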

Online Association

Although the association algorithm is not the focus of this work, here we introduce a simple and fast online association strategy to work in conjunction with JDE.

For a given video, the JDE model processes every frame and outputs bounding boxes and the corresponding appearance embeddings. We then compute the affinity matrix between the embeddings of the observations and the embeddings of a pool of previously existing tracklets, and assign observations to tracklets using the Hungarian algorithm. A Kalman filter is used to smooth the trajectories and predict the locations of existing tracklets in the current frame. If an assigned observation is spatially too far from the predicted location, the assignment is rejected. The embedding of a tracklet is then updated as follows,

$$f^{t} = \eta f^{t-1} + (1 - \eta) \tilde{f}, \qquad (7)$$

where $\tilde{f}$ indicates the embedding of the assigned observation, $f^{t}$ indicates the embedding of the tracklet at timestamp $t$, and $\eta$ is a momentum term for smoothing, which we set to $\eta = 0.9$. If no observation is assigned to a tracklet, the tracklet is marked as lost. A tracklet marked as lost is removed from the current tracklet pool if it stays lost for longer than a given threshold, or is re-activated if it is matched again in a later assignment step.
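For reference, the update of Eq. (7) amounts to an exponential moving average over a tracklet's appearance embedding; a minimal sketch follows, where re-normalizing to unit length (to keep the embedding compatible with cosine distance) is an illustrative choice of ours.

```python
import numpy as np

def update_tracklet_embedding(f_prev: np.ndarray, f_obs: np.ndarray,
                              eta: float = 0.9) -> np.ndarray:
    """Eq. (7): exponential moving average of a tracklet's appearance embedding."""
    f = eta * f_prev + (1.0 - eta) * f_obs
    return f / np.linalg.norm(f)   # re-normalize (illustrative; keeps cosine distance valid)
```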

Experiments

Datasets and Evaluation Metrics

Dataset   ETH   CP    CT    M16    CS    PRW   Total
img       2K    3K    27K   5.3K   11K   6K    54K
box       17K   21K   46K   112K   55K   18K   270K
ID        -     -     0.6K  0.5K   7K    0.5K  8.7K
Table 1: Statistics of the joint training set.

Performing experiments on small datasets may lead to biased results and conclusions may not hold when applying the same algorithm to large-scale datasets. Therefore, we build a large-scale training set by putting together six publicly available datasets on pedestrian detection, MOT and person search. These datasets can be categorized into two types: ones that only contain bounding box annotations, and ones that have both bounding box and identity annotations. The first category includes the ETH dataset [eth] and the CityPersons (CP) dataset [citypersons]. The second category includes the CalTech (CT) dataset [caltech], MOT-16 (M16) dataset [mot16], CUHK-SYSU (CS) dataset [personsearch] and PRW dataset [prw]. Training subsets of all these datasets are gathered to form the joint training set, and videos in the ETH dataset that overlap with the MOT-16 test set are excluded for fair evaluation. Table 1 shows the statistics of the joint training set.

For validation/evaluation, three aspects of performance need to be evaluated: the detection accuracy, the discriminative ability of the embedding, and the tracking performance of the entire MOT system. To evaluate detection accuracy, we compute the average precision (AP) at an IOU threshold of 0.5 over the Caltech validation set. To evaluate the appearance embedding, we extract embeddings of all ground truth boxes over the validation sets of the Caltech dataset, the CUHK-SYSU dataset and the PRW dataset, apply retrieval among these instances, and report the true positive rate at a false alarm rate of 0.1 (TPR@FAR=0.1). To evaluate the tracking accuracy of the entire MOT system, we employ the CLEAR metrics [CLEAR], particularly the MOTA metric, which aligns best with human perception. For validation, we use the MOT-15 training set, with the sequences that also appear in our training set removed. For testing, we use the MOT-16 test set to compare with existing methods.
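As an aside, the TPR@FAR retrieval metric can be computed from pairwise similarity scores with standard ROC utilities; a minimal sketch (using scikit-learn, as an illustrative implementation) follows.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_far(scores: np.ndarray, same_id: np.ndarray, far: float = 0.1) -> float:
    """scores: similarity of each pair; same_id: 1 if the pair shares an identity."""
    fpr, tpr, _ = roc_curve(same_id, scores)
    return float(np.interp(far, fpr, tpr))   # TPR at the requested false alarm rate

# Toy usage with random similarity scores for 1000 pairs.
rng = np.random.default_rng(0)
print(tpr_at_far(rng.random(1000), rng.integers(0, 2, 1000)))
```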

Embed. Loss                      Weighting Strategy   Det AP↑   Emb TPR↑   MOT MOTA↑   MOT IDs↓
$\mathcal{L}_{\text{triplet}}$   App.Opt              81.6      42.2       59.5        375
$\mathcal{L}_{\text{upper}}$     App.Opt              81.7      44.3       59.8        346
$\mathcal{L}_{\text{CE}}$        App.Opt              82.0      88.2       64.3        223
$\mathcal{L}_{\text{CE}}$        Uniform              6.8       94.8       36.9        366
$\mathcal{L}_{\text{CE}}$        MGDA-UB              8.3       93.5       38.3        357
$\mathcal{L}_{\text{CE}}$        Loss.Norm            80.6      82.1       57.9        321
$\mathcal{L}_{\text{CE}}$        Uncertainty          83.0      90.4       65.8        207
Table 2: Comparing different embedding losses and loss weighting strategies. TPR is short for TPR@FAR=0.1 on the embedding validation set, and IDs means the number of ID switches on the tracking validation set. ↓ means the smaller the better; ↑ means the larger the better. In each column, the best result is in bold, and the second best is underlined.

Implementation Details

We employ DarkNet-53 [yolov3] as the backbone network in JDE. The network is trained with standard SGD for 30 epochs. The learning rate is initialized as $10^{-2}$ and is decreased by a factor of 0.1 at the 15th and the 23rd epochs. Several data augmentation techniques, such as random rotation, random scaling and color jittering, are applied to reduce overfitting. Finally, the augmented images are resized to a fixed resolution. The input resolution is 1088×608 if not specified.
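The optimization schedule described above corresponds to the following minimal PyTorch sketch; the stand-in model and the SGD momentum value are illustrative, while the epoch count, initial learning rate, and decay milestones follow the text.

```python
import torch

model = torch.nn.Conv2d(3, 8, 3)   # stand-in for the JDE network (illustrative)
opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[15, 23], gamma=0.1)

for epoch in range(30):
    # ... one training epoch over the joint dataset goes here ...
    opt.step()                     # placeholder for the actual per-batch updates
    sched.step()                   # decay the learning rate at epochs 15 and 23
```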

Experimental Results

Comparison of the three loss functions for appearance embedding learning. We first compare the discriminative ability of appearance embeddings trained with the cross-entropy loss, the triplet loss, and its upper-bound variant described in the previous section. For models trained with $\mathcal{L}_{\text{triplet}}$ and $\mathcal{L}_{\text{upper}}$, pairs of temporally consecutive frames are sampled to form a mini-batch, which ensures that positive samples always exist. For models trained with $\mathcal{L}_{\text{CE}}$, this sampling strategy is not necessary, and images are randomly sampled to form a mini-batch. Table 2 presents comparisons of the three loss functions.

As expected, $\mathcal{L}_{\text{CE}}$ outperforms both $\mathcal{L}_{\text{triplet}}$ and $\mathcal{L}_{\text{upper}}$. Surprisingly, the performance gap is large (+46.0/+43.9 TPR@FAR=0.1). A possible reason for this large gap is that the cross-entropy loss requires the similarity between an instance and its positive class to be higher than the similarities between this instance and all negative classes. This objective is more rigorous than that of the triplet loss family, which exerts constraints merely within a sampled mini-batch. Considering its effectiveness and simplicity, we use the cross-entropy loss for embedding learning in JDE.

Figure 3: The change of loss weights (left) and losses (right) in the Uncertainty strategy during training. The profiles of three scales are shown. Note that both the loss weights and the losses are in log-scale. The Uncertainty strategy automatically learns reasonable loss weights and benefits the multi-task learning process.

Comparison of different loss weighting strategies. The loss weighting strategy is crucial for learning a good joint representation in JDE. In this paper, three loss weighting strategies are implemented. The first is a loss normalization method (named “Loss.Norm”), where the losses are weighted by the reciprocal of their moving-average magnitudes. The second is the “MGDA-UB” algorithm proposed in [MGDA], and the last is the weight-by-uncertainty strategy described in the previous section. Moreover, we have two baselines. The first trains all the tasks with identical loss weights, named “Uniform”. The second, referred to as “App.Opt”, uses a set of approximately optimal loss weights found by searching under the two-independent-variable assumption described in the previous section. Table 2 summarizes the comparisons of these strategies. Two observations can be made.

First, the Uniform baseline produces poor detection results, and thus the tracking accuracy is not good. This is because the scale of the embedding loss is much larger than the other two losses and dominates the training process. Once we set proper loss weights to let all tasks learn at a similar rate, as in the “App.Opt” baseline, both the detection and embedding tasks yield good performance.

Second, the results indicate that the “Loss.Norm” strategy outperforms the “Uniform” baseline but is inferior to the “App.Opt” baseline. The MGDA-UB algorithm, despite being the most theoretically sound method, fails in our case because it assigns too large a weight to the embedding loss, such that its performance is similar to the Uniform baseline. The only method that outperforms the App.Opt baseline is the weight-by-uncertainty strategy. In Figure 3, we visualize the curves of the losses and the learned weights under the Uncertainty method. We observe that although the loss weights are uniformly initialized, the Uncertainty method rapidly reduces the weight of the embedding loss to around 0.1 and raises the loss weights of the other two tasks to a magnitude of several tens. This is roughly consistent with the optimal weights in our App.Opt baseline (64:0.1), but since the loss weights are automatically learned, the “Uncertainty” strategy leads to higher tracking accuracy.

Figure 4: Comparing JDE and various SDE combinations in terms of tracking accuracy (MOTA) and speed (FPS). (a) shows comparisons under the case where the pedestrian density is low (MOT-15 train set), and (b) shows comparisons under the crowded scenario (MOT-CVPR19-01). Different colors represent different embedding models, and different shapes denote different detectors. We clearly observe that the proposed JDE method (JDE Embedding + JDE-DN53) has the best time-accuracy trade-off. Best viewed in color.
Method            Det    Emb        box    id     MOTA↑  IDF1↑  MT↑   ML↓   IDs↓   FPSD   FPSA   FPS
DeepSORT_2        FRCNN  WRN        429K   1.2K   61.4   62.2   32.8  18.2  781    15*    17.4   8.1*
RAR16wVGG         FRCNN  Inception  429K   -      63.0   63.8   39.9  22.1  482    15*    1.6    1.5*
TAP               FRCNN  MRCNN      429K   -      64.8   73.5   40.6  22.0  794    15*    18.2   8.2*
CNNMTT            FRCNN  5-Layer    429K   0.2K   65.2   62.2   32.4  21.3  946    15*    11.2   6.4*
POI               FRCNN  QAN        429K   16K    66.1   65.1   34.0  21.3  805    15*    9.9    6.0*
JDE-864 (ours)    JDE    -          270K   8.7K   62.1   56.9   34.4  16.7  1,608  34.3   81.0   24.1
JDE-1088 (ours)   JDE    -          270K   8.7K   64.4   55.8   35.4  20.0  1,544  24.5   81.5   18.8
Table 3: Comparison with state-of-the-art online MOT systems under the private data protocol on the MOT-16 benchmark. Accuracy is evaluated with the CLEAR metrics, and runtime is evaluated with three metrics: frames per second of the detector (FPSD), frames per second of the association step (FPSA), and frames per second of the overall system (FPS). * indicates estimated timing. We clearly observe that our method has the best efficiency and comparable accuracy.

Comparison with SDE methods. To demonstrate the superiority of JDE over the Separate Detection and Embedding (SDE) methods, we implement several state-of-the-art detectors and person re-ID models and compare their combinations with JDE in terms of both tracking accuracy (MOTA) and runtime (FPS). The detectors include JDE with ResNet-50 and ResNet-101 [resnet] as backbones, Faster R-CNN [faster] with ResNet-50 and ResNet-101 as backbones, and Cascade R-CNN [cascade] with ResNet-50 and ResNet-101 as backbones. The person re-ID models include IDE [ide], Triplet [indefense] and PCB [pcb]. In the association step, we use the same online association approach described in the previous section for all the SDE models. For a fair comparison, the training data used by these SDE models are the same as those used by JDE.

In Figure 4, we plot the MOTA metric against the runtime (FPS) of the SDE combinations of the above detectors and person re-ID models. The runtimes of all models are tested on a single Nvidia Titan Xp GPU. Figure 4 (a) shows comparisons on the MOT-15 train set, in which the pedestrian density is low. In contrast, Figure 4 (b) shows comparisons on a video sequence that contains a high-density crowd (CVPR19-01 from the CVPR19 MOT challenge dataset). Several observations can be made.

First, the proposed JDE runs very fast and meanwhile produces competitive tracking accuracy, reaching the best trade-off between accuracy and speed. Specifically, JDE with DarkNet-53 (JDE-DN53) runs at 22 FPS and produces tracking accuracy nearly as good as the combination of the Cascade RCNN detector with ResNet-101 (Cascade-R101) + PCB embedding, while the latter only runs at 6 FPS.

Second, the tracking accuracy of JDE is very close to the combinations of JDE+IDE, JDE+Triplet and JDE+PCB (see the cross markers in Figure 4). This suggests the jointly learned embedding is almost as discriminative as the separately learned embedding.

Finally, comparing the runtime of the same model between Figure 4 (a) and (b), it can be observed that all the SDE models suffer a significant speed drop in the crowded case. This is because the runtime of the embedding model increases with the number of detected targets. This drawback does not exist in JDE, because the embeddings are computed together with the detection results. As such, the runtime difference between JDE in the usual case and the crowded case is much smaller (see the red markers). In fact, the remaining speed drop of JDE is due to the increased time spent in the association step, which is positively related to the number of targets.

Comparison with the state-of-the-art MOT systems. Since we train JDE using additional data rather than the MOT-16 train set alone, we compare JDE under the “private data” protocol of the MOT-16 benchmark. State-of-the-art online MOT methods under the private protocol are compared, including DeepSORT_2 [deepsort], RAR16wVGG [rar], TAP [tap], CNNMTT [cnnmtt] and POI [poi]. All these methods employ the same detector, i.e., Faster R-CNN with VGG-16 as the backbone, which is trained on a large private pedestrian detection dataset. The main differences among these methods reside in their embedding models and association strategies. For instance, DeepSORT_2 employs a Wide Residual Network (WRN) [wrn] as the embedding model and uses the MARS [mars] dataset to train the appearance embedding. RAR16wVGG, TAP, CNNMTT and POI use Inception [inception], Mask R-CNN [maskrcnn], a 5-layer CNN, and QAN [qan] as their embedding models, respectively. The training data of these embedding models also differ from each other. For a clear comparison, we list the detectors, embedding models, and the amount of training data for all these methods in Table 3. Accuracy and speed metrics are also presented.

Considering the overall tracking accuracy, e.g., the MOTA metric, JDE is generally comparable. Our result is higher than that of DeepSORT_2 by +3.0% and lower than that of POI by 1.7%. In terms of running speed, it is not feasible to compare these methods directly because their runtimes are not all reported. Therefore, we re-implemented the VGG-16 based Faster R-CNN detector, benchmarked its running speed, and then estimated upper bounds on the running speed of the entire MOT system for these methods. Note that for some methods the runtime of the embedding model is not taken into account, so the speed upper bounds are far from tight. Even against such relaxed upper bounds, the proposed JDE runs at least twice as fast as existing methods, reaching a near real-time speed, i.e., 18.8 FPS at an input resolution as high as 1088×608. When we down-sample the input frames to a lower resolution of 864×480, the runtime of JDE can be further sped up to 24.1 FPS with only a minor performance drop (-2.3% MOTA, from 64.4 to 62.1).

Analysis and discussion. One may notice that JDE has a lower IDF1 score and more ID switches than existing methods. At first we suspected that the jointly learned embedding might be weaker than a separately learned one. However, when we replace the jointly learned embedding with the separately learned embedding, the IDF1 score and the number of ID switches remain almost the same. Finally, we find that the major reason lies in inaccurate detections when multiple pedestrians have large overlaps with each other. Figure 5 shows such a failure case of JDE. Such inaccurate boxes introduce many ID switches, and unfortunately, these ID switches often occur in the middle of a trajectory, hence the lower IDF1 score. How to improve JDE to predict more accurate boxes under significant pedestrian overlap remains to be solved in our future work.

Figure 5: Failure case analysis. Inaccurate detection results arising when pedestrians overlap significantly give rise to ID switches. Best viewed in color.

Conclusion

In this paper, we introduce JDE, an MOT system that allows target detection and appearance embeddings to be learned in a shared model. Our design significantly reduces the runtime of an MOT system, enabling it to run at (near) real-time speed. Meanwhile, the tracking accuracy of our system is comparable with state-of-the-art online MOT methods. Moreover, we have provided thorough analysis, discussion and experiments on good practices and insights for building such a joint learning framework. In the future, we will investigate the time-accuracy trade-off of JDE more deeply.

References