Multiple object tracking (MOT), which aims to extract trajectories of multiple moving objects in a video sequence, is a crucial step in video understanding. A robust and reliable MOT system is the basis for a wide range of practical applications including video surveillance, autonomous driving, and sports video analysis. To construct an automatic tracking system, most effective MOT approaches, e.g., [32, 67, 7, 9, 30, 62, 27, 63, 58, 15], require a pre-trained detector, e.g., [22, 18, 26, 61, 11, 46], to discover the target objects in the video frames (usually in the form of their bounding boxes). As such, a general MOT system entails an object detection step that finds target locations in each video frame, and an object tracking step that generates target trajectories across video frames. (Footnote 1: We make an explicit distinction between the object tracking method and the MOT system, i.e., MOT system = detection + tracking. Notably, the term "object tracking" refers specifically to multi-object tracking in this work.)
Despite significant advances in recent years, relatively little effort has been made toward large-scale and comprehensive evaluations of MOT methods that consider both the object detection and tracking steps. Current MOT evaluation methods usually separate the evaluation of the object detection step (e.g., [21, 19, 25, 48]) from that of the object tracking step (e.g., [23, 6, 25, 41, 35]). These works have shown the effect of various factors on the performance of the object tracking step, such as appearance ambiguity among targets and occlusions, and provide important information that allows us to better understand object tracking methods. Furthermore, a recent study also shows the importance of ground truth annotation and evaluation metrics in analyzing object tracking methods.
However, we argue here that the performance of the object tracking step can only be fully revealed by evaluating the performance of the overall MOT system. Existing object tracking evaluation methodologies usually use fixed object detection results as input to exclude the effect of the object detection step. While this evaluation strategy is widely adopted in the current literature and has yielded some useful insights into object tracking methods, it is insufficient for fully analyzing complete MOT systems (see Figure 1). In particular, it is important to understand the effect of detection accuracy on overall MOT system performance, which can only be revealed by a comprehensive quantitative study of the object detection and object tracking steps jointly.
In this work, we perform such a study on the basis of a new and large-scale MOT evaluation dataset, the University at Albany DETection and tRACking (UA-DETRAC) dataset. The UA-DETRAC dataset includes challenging video sequences corresponding to more than frames of real-world traffic scenes. These video sequences are manually annotated with more than million labeled bounding boxes of vehicles, along with several useful attributes, e.g., weather conditions, vehicle category, and occlusion. To the best of our knowledge, this is the largest and most thoroughly annotated MOT evaluation dataset to date (see Table 1 for a detailed comparison with other benchmark datasets), and it poses new challenges for object detection and tracking algorithms. We evaluate the complete MOT systems constructed from combinations of six object tracking schemes [3, 45, 4, 17, 58, 5] and four object detection methods [22, 18, 26, 11].
Our analysis on the UA-DETRAC dataset reveals some interesting and previously unnoticed observations. In particular, previous MOT evaluations use fixed object detections to compare different object tracking methods, but our experimental results (see Figure 1) show that the relative rankings of the corresponding MOT systems vary significantly with different settings of object detections. Furthermore, some object tracking methods are more robust to varying object detection quality than others. As such, using fixed object detection results is not sufficient to reveal the full behavior of the subsequent object tracking step, and can lead to biased evaluations and conclusions.
Based on these observations, we further propose a new evaluation protocol with metrics for object tracking methods and MOT systems. The proposed UA-DETRAC evaluation protocol considers object detection and object tracking in tandem. One recent work  also addresses the issue of MOT performance evaluation with fixed detection results. Similar to our work, it reveals the inadequacy of the fixed detection inputs adopted in the current object tracking evaluation strategy, and suggests studying object tracking methods using multiple noise-perturbed detection results derived from the ground truth annotations. However, evaluating with artificially perturbed detections does not accurately reflect the behavior of an object detector in practice. In contrast, our analysis is based on the actual outputs of state-of-the-art object detectors over the full range of their precision-recall rates. From this perspective, our analysis and MOT system evaluation strategy provide a closer description of how a complete MOT system performs in practice.
The main contributions of this work are summarized as follows. (1) We present the large-scale UA-DETRAC dataset for both vehicle detection and MOT evaluation, which is distinctly different from existing databases in terms of data volume, annotation quantity and quality, and difficulty (see Table 1). (2) We propose a new protocol and evaluation metrics for object tracking methods and MOT systems that consider the detection and tracking steps jointly. (3) Our benchmark dataset serves three different evaluation scenarios, i.e., object detection, object tracking, and complete MOT system evaluation, which is useful for promoting the development of the object detection and tracking fields. (4) Based on the UA-DETRAC dataset and evaluation protocol, we perform comprehensive performance evaluations of complete MOT systems built from state-of-the-art detection and tracking algorithms, and analyze the conditions under which existing methods may fail.
2 UA-DETRAC Benchmark
The UA-DETRAC dataset consists of video sequences, which are selected from over hours of videos taken with a Canon EOS 550D camera at different locations, representing various common traffic types and conditions including urban highways, traffic crossings, and T-junctions. The videos are recorded at frames per second (fps), with a JPEG image resolution of pixels. A web site (Footnote 2: http://detrac-db.rit.albany.edu) is available for performance evaluation, with a submission protocol similar to the Middlebury stereo dataset  and the MOT15 benchmark .
2.1 Data Collection and Annotation
Properties and Annotation. We manually annotate more than frames in the UA-DETRAC dataset with vehicles, leading to a total of million labeled bounding boxes of vehicles. Similar to PASCAL VOC , we label some "ignore" regions, which include vehicles that cannot be annotated due to low resolution. In Fig. 2, we present several examples of the annotated video frames with detailed attributes in the UA-DETRAC dataset.
The UA-DETRAC dataset is divided into training (UA-DETRAC-train) and testing (UA-DETRAC-test) sets, with and sequences, respectively. We deliberately select training videos that are taken at different locations from the testing videos, but ensure the training and testing videos share similar traffic conditions and attributes. This setting reduces the chance of the object detector or object tracking method overfitting to particular scenarios, while still ensuring generalization from the training to the testing phase. All the benchmarked detection and tracking algorithms are trained on the UA-DETRAC-train set and evaluated on the UA-DETRAC-test set.
The UA-DETRAC dataset contains videos with large variations in scale, pose, illumination, occlusion, and background clutter. Similar to KITTI-D  and WIDER FACE , we define three levels of difficulty in the UA-DETRAC-test set, i.e., easy ( sequences), medium ( sequences), and hard ( sequences), based on the detection rate of the EdgeBox method , as shown in Figure 3. The average recall rates of these three levels are , , and , respectively, with proposals per frame.
On the other hand, since the difficulty of the object tracking step may not be consistent with that of the detection task, similar to MOT15 , we also label the sequences with respect to the object tracking step as easy ( sequences), medium ( sequences), and hard ( sequences), based on the average PR-MOTA scores (defined in Section 3.2) of the six benchmarked object tracking methods, i.e., GOG , CEM , DCT , IHTLS , H2T , and CMOT , as shown in Figure 4.
Moreover, to analyze the performance of object detection and tracking algorithms in detail, we also annotate several attributes:
Vehicle category. We annotate four types of vehicles, i.e., car, bus, van, and others. The distribution of vehicle categories is shown in Figure 5(a).
Weather. We consider four categories of weather conditions, i.e., cloudy, night, sunny, and rainy. The distribution of the weather attribute is presented in Figure 5(b).
Scale. We define the scale of an annotated vehicle bounding box as the square root of its area in pixels. The distribution of vehicle scales in the dataset is presented in Figure 5(c). We group vehicles into three scales: small (- pixels), medium (- pixels), and large (more than pixels).
Occlusion ratio. We use the fraction of the vehicle bounding box that is occluded to define occlusion. The distribution of occluded vehicles is shown in Figure 5(d). We classify occlusion into three categories: no occlusion, partial occlusion, and heavy occlusion. Specifically, a vehicle is defined as partially occluded if its occlusion ratio is between , and heavily occluded if its occlusion ratio is larger than .
Truncation ratio. The truncation ratio indicates the degree to which vehicle parts are outside the frame, and is used in training data selection. We discard any sample with a truncation ratio larger than for training.
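The attribute definitions above can be sketched as simple bounding-box computations. The numeric thresholds below are illustrative placeholders, not the dataset's actual cutoffs (which are given in the text), and the function names are our own:

```python
# Sketch of how the annotation attributes could be derived for each labeled
# vehicle box. All thresholds are hypothetical example values.
import math

def vehicle_scale(width, height, small_max=50.0, medium_max=150.0):
    """Scale = square root of the bounding-box area in pixels."""
    s = math.sqrt(width * height)
    if s < small_max:
        return "small"
    if s < medium_max:
        return "medium"
    return "large"

def occlusion_category(occluded_area, box_area, heavy_lo=0.5):
    """Occlusion ratio = fraction of the bounding box that is occluded."""
    ratio = occluded_area / box_area
    if ratio <= 0.0:
        return "no occlusion"
    if ratio < heavy_lo:
        return "partial occlusion"
    return "heavy occlusion"

def keep_for_training(truncation_ratio, max_truncation=0.5):
    """Samples whose truncation ratio exceeds the cutoff are discarded."""
    return truncation_ratio <= max_truncation
```

Such helpers make the attribute assignment reproducible from the raw box annotations alone.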
2.2 Comparison with Existing Datasets
2.2.1 Object Detection Datasets
Several benchmarks exist for object detection, e.g., PASCAL VOC , ImageNet, Caltech , KITTI-D , and KAIST , which have made great contributions to the development of the object detection field. However, the main focus of these benchmarks is object detection, and they provide useful baseline results when detectors are incorporated into MOT systems.
2.2.2 Object Tracking Datasets
Several multi-object tracking benchmarks have also been collected for evaluating state-of-the-art object tracking methods. Among the most widely used are PETS09 , KITTI-T , MOT15 , and MOT16 . The PETS09 dataset is a large crowd dataset that focuses on multi-pedestrian tracking and counting. The KITTI-T benchmark is a multi-vehicle tracking dataset, taken from a moving vehicle with the viewpoint of the driver; the KITTI-T and KITTI-D benchmarks target object tracking and detection separately. The MOT15 benchmark aims to provide a unified dataset, platform, and evaluation protocol for existing object tracking methods. It includes a dataset of video sequences mostly from surveillance cameras, with pedestrians as the tracking targets of interest. In addition, it provides an open system where new datasets and multi-object tracking methods can be incorporated in a plug-and-play manner. The MOT16 benchmark improves on MOT15 by adding more challenging data with thorough annotation. Compared to existing multi-object tracking benchmarks, the UA-DETRAC benchmark is designed for another practical scenario of object detection and MOT applications, i.e., vehicle surveillance, with a significantly larger number of video frames, annotated bounding boxes, and attributes. Meanwhile, the UA-DETRAC benchmark is designed to evaluate object detection, object tracking, and complete MOT systems simultaneously, rather than only the object tracking step. Table 1 summarizes the differences between the existing benchmarks and the proposed UA-DETRAC benchmark in various aspects.
2.3 Object Detection Algorithms
We present a brief survey of the state of the art in object detection, and then describe the four detection algorithms benchmarked on our dataset.
2.3.1 Survey of Existing Object Detection Methods
We focus on computer vision algorithms for detecting objects in individual image frames. Papageorgiou et al.  presented one of the first sliding-window based detection systems for object detection in unconstrained scenes, applying a support vector machine (SVM) to multi-scale Haar wavelet features. Viola and Jones  built upon this system using a cascaded AdaBoost learning algorithm with Haar features to perform face detection effectively and efficiently. To achieve robust performance, Zhang et al.  proposed a new discriminative feature, called the multi-block local binary pattern (MB-LBP), to represent facial images, which captures more image structure information than Haar-like features.
Gradient features are an important cue for detection. Dalal and Triggs  popularized the histogram of oriented gradients (HOG) for detection, which significantly outperformed existing feature sets. To improve human detection in films and videos,  designed a new descriptor that captures the relative motion of different limbs while resisting background motion, combining optical flow with the histogram of oriented gradients descriptor of . Large gains also came with the adoption of other kinds of features. Dollár et al.  extended Haar-like features over multiple channels of image data, including gradient magnitude, gradient magnitude quantized by orientation, and LUV color channels, providing a simple and efficient pedestrian detection framework based on fast feature pyramids.
Feature representation is one of the core steps in object detection. In contrast to previous hand-crafted features, e.g., Haar , HOG , and MB-LBP , learned features (e.g., CNN features ) became popular in recent years because of their outstanding performance. Girshick et al.  proposed R-CNN, a general object detection strategy that combines region proposals with convolutional neural networks (CNNs), and showed that it achieves dramatically higher detection performance on PASCAL VOC than systems based on HOG-like features. Cai et al.  proposed a new cascade learning approach, formulating cascade learning as the Lagrangian optimization of a risk accounting for both accuracy and complexity, solved by a boosting algorithm. Redmon et al.  formulated object detection as a regression problem in which a single neural network predicts bounding boxes and associated class probabilities from full images in one evaluation. This approach is extremely fast and can be optimized end-to-end directly on detection performance. The aforementioned approaches gained considerable performance improvements from CNN features.
Felzenszwalb et al.  presented one of the most successful general object detection strategies, a discriminative part-based approach that models part positions as latent variables in a support vector machine (SVM) framework. Park et al.  extended this strategy and described a multi-resolution model that acts as a deformable part-based model (DPM) when handling large instances and as a rigid template when handling small instances. However, speed was the bottleneck of DPM in real applications. To that end, Yan et al.  accelerated DPM by constraining the root filter to be low rank, designing a neighborhood-aware cascade to capture the dependence between neighboring regions for aggressive pruning, and constructing look-up tables to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. In this way, the speed of DPM can be greatly improved while achieving similar accuracy.
Considerable effort has also been devoted to improving object proposal generation. Lampert et al.  improved the traditional sliding-window based proposal generation strategy with a branch-and-bound scheme that efficiently maximizes the classifier function over all possible subwindows based on a bag-of-words image representation [51, 12]. Van de Sande et al.  proposed a selective search strategy that uses segmentation to generate a limited but precise set of locations. Zitnick and Dollár  observed that the number of contours wholly contained in a bounding box is indicative of the likelihood that the box contains an object, and proposed a simple box objectness score to guide object proposal generation, producing accurate proposals with high efficiency.
2.3.2 Evaluated Object Detectors
We evaluate four state-of-the-art object detection algorithms in the proposed benchmark (Footnote 3: All source codes of the detection algorithms are publicly available or provided by the authors.), including DPM , ACF , R-CNN , and CompACT . We retrain these methods on the UA-DETRAC-train set and evaluate their performance on the UA-DETRAC-test set. The DPM method is trained using a mixture of star-structured models, each having latent orientations. The ACF cascade uses decision trees of depth . For the CompACT scheme, we train a cascade of decision trees of depth , using all handcrafted features in  except CNN features. For the ACF and CompACT methods, the template size is set to . To detect vehicles with different aspect ratios, the original images are resized to six different aspect ratios before being scanned by the detectors, so that only a single model is needed. A bounding box regression model based on the ACF features is trained for the ACF and CompACT detectors to improve detection performance. For the R-CNN algorithm, we fine-tune AlexNet  on the UA-DETRAC-train set. Instead of using selective search to generate proposals, the output bounding boxes of the ACF method are warped to pixels and then fed into the R-CNN framework for classification. Bounding box regression is not used in the R-CNN method. The positive samples are all types of vehicles from the UA-DETRAC-train set, while the KITTI-D dataset  is used for hard negative mining. The minimum size of the detected object is set to pixels for all detectors.
2.4 Object Tracking Algorithms
We briefly review multi-object tracking algorithms, and then describe the six benchmarked state-of-the-art object tracking approaches in detail.
2.4.1 Survey of the Existing MOT Methods
Many recent effective multi-target tracking algorithms are based on the tracking-by-detection framework, which formulates tracking as a target association problem, i.e., the input per-frame detections are linked by the tracker based on their similarities in appearance and motion to form long tracks. Typical methods are the Joint Probabilistic Data Association Filter (JPDAF)  and Multiple Hypothesis Tracking (MHT) . The JPDAF  method solves the matching problem between tracked targets and detections in each frame with a probabilistic approach, while the MHT  method evaluates the likelihoods of the hypothesized matches over several time steps. Considering more frames jointly improves the performance of the MHT method compared with the JPDAF method. However, the solution space of the MHT method grows exponentially with the number of frames considered, which makes MHT inefficient for long-term association.
Various algorithms formulate the association of detection/tracklet pairs as an optimization task based on K-shortest paths (KSP) , maximum weight independent sets , maximum multi-clique optimization , tensor power iterations, network flows [67, 45, 31], the Hungarian algorithm , generalized linear assignment optimization , and subgraph decomposition . To exploit the motion information of targets, Wen et al.  formulated the multi-object tracking task as exploiting dense structures on a hypergraph, whose nodes are detections and whose hyper-edges encode the high-order relations among detections. Subsequently,  removed the speed bottleneck of  and made the tracker run in real time by using a RANSAC-style approach to extract the dense structures on the hypergraph efficiently. Milan et al.  formulated multi-object tracking as an energy minimization problem, taking into account physical constraints such as target dynamics, mutual exclusion, and track persistence.  extended this approach and formulated multi-object tracking as a discrete-continuous optimization problem that integrates data association and trajectory estimation into a consistent energy, which is similarly solved by the approach in .
2.4.2 Evaluated Object Trackers
We evaluate the performance of different object tracking methods and MOT systems on the UA-DETRAC dataset. Notably, these MOT systems are constructed from the combinations of four state-of-the-art object detection algorithms, including DPM , ACF , R-CNN , and CompACT , and six object tracking algorithms (Footnote 4: All source codes of the object detection and tracking algorithms are publicly available or provided by the authors.), including GOG , CEM , DCT , IHTLS , H2T , and CMOT . All these methods take the object detection results in each frame as input and generate target trajectories to complete the tracking task. We use the UA-DETRAC-train set to determine the parameters of these methods, and the UA-DETRAC-test set for evaluation.
3 UA-DETRAC Evaluation Protocol
As discussed in Section 1, existing multi-object tracking evaluation protocols that use the same set of object detections as input are not adequate to fully understand overall MOT system performance. In this section, we introduce a new MOT evaluation protocol that considers object detection and tracking jointly. We first describe the evaluation protocol for the object detection task in the UA-DETRAC benchmark.
3.1 Evaluation Protocol for Object Detection
Evaluation metric. We use the precision vs. recall (PR) curve for object detection. The PR curve is generated by varying the threshold of an object detector to obtain different precision and recall values. Per-frame detector evaluation is performed as in the KITTI-D benchmark, with the hit/miss threshold of the overlap between a detected bounding box and a ground truth bounding box set to .
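The per-frame hit/miss criterion can be sketched as follows. The box format and helper names are our own, and the overlap measure is the standard intersection-over-union; the 0.7 threshold below is only an example value, since the benchmark's exact threshold is stated in the text:

```python
# Minimal sketch of the hit/miss criterion: a detection counts as a hit if
# its intersection-over-union (IoU) with a ground-truth box exceeds a
# threshold. Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_hit(detection, ground_truth, threshold=0.7):
    """True if the detection overlaps the ground truth enough to count."""
    return iou(detection, ground_truth) >= threshold
```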
Detection ranking. The average precision (AP) score of the PR curve is used to indicate performance, i.e., a larger AP score indicates better performance of an object detection algorithm. The performance of the evaluated detection algorithms is presented in Figure 7(a).
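As a rough sketch, the AP score can be computed by numerically integrating the PR curve. Interpolation conventions differ between benchmarks, so this plain trapezoidal variant is only illustrative:

```python
# Sketch: average precision (AP) as the area under the precision-recall
# curve, approximated by trapezoidal integration over sampled points.
def average_precision(recalls, precisions):
    """recalls must be sorted in increasing order; returns a value in [0, 1]."""
    ap = 0.0
    for i in range(1, len(recalls)):
        dr = recalls[i] - recalls[i - 1]          # width of the recall step
        ap += dr * (precisions[i] + precisions[i - 1]) / 2.0
    return ap
```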
3.2 Evaluation Protocol for Object Tracking
Evaluation metric. We first introduce a set of performance evaluation metrics for object tracking used in the previous literature, including mostly tracked (MT), mostly lost (ML), identity switches (IDS), fragmentations of target trajectories (FM), false positives (FP), false negatives (FN), and the two CLEAR MOT metrics , multi-object tracking accuracy (MOTA) and multi-object tracking precision (MOTP). The FP metric counts tracker outputs that are false alarms, and FN counts targets missed by all tracked trajectories in each frame. The IDS metric counts the number of times that the matched identity of a tracked trajectory changes, while FM counts the number of times that trajectories are disconnected. Both the IDS and FM metrics reflect the accuracy of tracked trajectories. The ML and MT metrics measure the percentage of tracked trajectories covering less than and more than of the ground-truth time span, respectively. The MOTA metric over all sequences in the benchmark is defined as
$$\mathrm{MOTA} = \Big(1 - \frac{\sum_{v,t}\big(\mathrm{FN}_t^v + \mathrm{FP}_t^v + \mathrm{IDS}_t^v\big)}{\sum_{v,t} g_t^v}\Big) \times 100\%,$$
where $\mathrm{FN}_t^v$ is the number of false negatives, $\mathrm{FP}_t^v$ the number of false positives, and $\mathrm{IDS}_t^v$ the number of identity switches at time index $t$ of sequence $v$, with the hit/miss threshold of the bounding box overlap between an output trajectory and the ground truth set to , and $g_t^v$ is the number of ground truth objects at time index $t$ of sequence $v$. The MOTP metric is the average dissimilarity between all true positives and their corresponding ground truth targets, computed as the average overlap between all correctly matched hypotheses and their respective objects. Notably, the MOTA score is calculated from the FN, FP, and IDS scores, so directly comparing MOTA scores amounts to comparing two aggregated sets of FN, FP, and IDS scores. Thus, it is inadequate to judge the performance of MOT systems based on the MOTA score alone, even together with other metrics, e.g., MOTP, FP, FN, etc.
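The MOTA computation described above reduces to a few aggregated counts. A minimal sketch, with per-frame counts already aggregated over all sequences (the function name is ours):

```python
# Sketch of the CLEAR MOT accuracy score from per-frame counts: false
# negatives (fn), false positives (fp), identity switches (ids), and the
# number of ground-truth objects (gt). Each argument is a list with one
# entry per frame, concatenated over all sequences.
def mota(fn, fp, ids, gt):
    """Returns MOTA expressed as a percentage (can be negative)."""
    errors = sum(fn) + sum(fp) + sum(ids)
    return (1.0 - errors / sum(gt)) * 100.0
```

Note that MOTA folds three error types into one number, which is precisely why the text argues it cannot fully characterize an MOT system on its own.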
As discussed above, it is necessary to consider object detection and tracking jointly in evaluation. Thus, we introduce the UA-DETRAC metrics, i.e., the PR-MOTA, PR-MOTP, PR-MT, PR-ML, PR-IDS, PR-FM, PR-FP, and PR-FN scores, based on the basic evaluation metrics while accounting for the effect of the detection module. First, we take the basic evaluation metric MOTA as an example to describe how to calculate the PR-MOTA score. The PR-MOTA curve (see Figure 6) is a three-dimensional curve characterizing the relation between object detection performance (precision and recall) and object tracking performance (MOTA). In the following, we describe the steps to create the PR-MOTA curve and score.
We first vary the detection threshold gradually (Footnote 5: Specifically, we vary the threshold times from the minimal to the maximal scores of the input detections to generate the PR-MOTA curve.) to generate different sets of object detections (bounding boxes) corresponding to different values of precision and recall. The resulting two-dimensional precision-recall (PR) curve delineates the region of possible PR values of object detection.
For a particular set of object detections determined by a given threshold, we apply an object tracking algorithm and compute the resulting MOTA score. The MOTA scores for all points on the PR curve form a three-dimensional curve, i.e., the PR-MOTA curve, as shown in Figure 6.
From the PR-MOTA curve, we calculate an integral score to measure multi-object tracking performance (see Figure 6), i.e., the PR-MOTA score, defined as the line integral of the MOTA values along the PR curve. In other words, the PR-MOTA score corresponds to the (signed) area of the curved surface formed by the PR-MOTA curve along the PR curve, as shown by the shaded area in Figure 6.
Using these scores, we can compare different multi-object tracking algorithms while integrating the effect of object detections. (Footnote 6: The proofs concerning the range of the score can be found in the Appendix.) The scores of all combinations of the benchmarked detection and tracking algorithms are presented in Table 2, which reflect the overall performance of the MOT systems. The scores for the other seven metrics, e.g., PR-MOTP and PR-IDS, are calculated similarly. In this way, we can use the calculated scores to rank different object tracking methods and complete MOT systems.
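The construction of the PR-MOTA score can be sketched as a threshold sweep followed by a numerical line integral. Here `run_tracker_and_score` is a hypothetical callable standing in for the full detector-plus-tracker pipeline, and the trapezoidal approximation of the line integral is our own simplification:

```python
# Sketch of the PR-MOTA computation: sweep the detection threshold, run the
# tracker on each resulting detection set, and integrate the MOTA values
# along the PR curve, approximating the line integral by summing MOTA times
# the arc length of each PR-curve segment.
import math

def pr_mota(thresholds, run_tracker_and_score):
    """run_tracker_and_score(t) -> (precision, recall, mota) at threshold t."""
    points = [run_tracker_and_score(t) for t in thresholds]
    score = 0.0
    for (p0, r0, m0), (p1, r1, m1) in zip(points, points[1:]):
        ds = math.hypot(p1 - p0, r1 - r0)   # arc length of the PR segment
        score += ds * (m0 + m1) / 2.0       # trapezoidal rule along the curve
    return score
```

The other PR-* metrics would be computed the same way, substituting MOTP, IDS, etc., for the MOTA value at each threshold.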
MOT system ranking.
We rank different MOT systems based on their PR-MOTA scores, i.e., a larger PR-MOTA score indicates better MOT performance. The tracking results of all MOT systems constructed from combinations of the four object detection and six object tracking methods benchmarked on the UA-DETRAC dataset are shown in Table 2.
Object tracking ranking.
As presented in Figure 1, different detection algorithms greatly affect the overall performance of MOT systems. A robust object tracking method is expected to perform well even with different detection algorithms. Thus, the average scores of the UA-DETRAC metrics over the four detection methods (i.e., DPM , ACF , R-CNN , and CompACT ) are used to rank the object tracking methods. We use the average PR-MOTA scores over the four detection methods to compare different object tracking methods, i.e., larger average PR-MOTA scores correspond to a higher ranking. The results are presented in Table 3.
3.3 Comparison with Existing Evaluation Protocols
Evaluating MOT methods is a complex issue. The analysis in  shows that the widely used metrics in the literature [53, 37], such as the MOTA, MOTP, or IDS scores, do not fully describe how whole systems perform. Furthermore, there are several issues with the associated MOT evaluation protocols. Early multi-object tracking studies [65, 4] use different object detection methods to generate inputs to different object tracking methods. It is known that the arbitrary choice of object detection inputs affects the MOT results. Most recent MOT evaluation works (e.g., [27, 58, 35, 40]) adopt a different protocol that feeds the same fixed detection inputs to different object tracking methods, in order to make the MOT evaluation independent of variations in object detection results. It has been shown in  that the performance of MOT systems cannot be clearly reflected with fixed detection inputs, and multiple synthetic detections generated by controlled noise are used for comparisons. However, these synthetically generated detection results do not fully correspond to how detectors perform on real images. Furthermore, in , the detections are randomly perturbed independently for each frame, which differs from real detectors that generate correlated detections in consecutive frames. In contrast, the UA-DETRAC protocol considers the detector performance for MOT evaluation. By using the three-dimensional curves of detection (PR) and tracking scores (e.g., MOTA, MOTP, etc.), the UA-DETRAC protocol better reflects the behavior of MOT systems, and is more useful for evaluating all components of an MOT system.
4 Analysis and Discussion
4.1 Object Detection
Overall performance. Significant progress has been made in object detection. However, the results of the four state-of-the-art object detectors on the UA-DETRAC dataset, presented as PR curves in Figure 7(a), show that there is still much room for improvement. Specifically, the DPM and ACF methods do not perform well on vehicle detection, producing only and AP scores. The deep learning based R-CNN method performs slightly better than the ACF method with AP. Overall, the most recent CompACT  algorithm achieves the best performance among the four detectors with AP. As shown in Figure 7(b)-(d), from the easy to the hard set, the AP scores of the four detectors drop by more than , and the AP score of the classic DPM detector is only on the hard set, which indicates that much improvement is necessary for detectors to be used in the challenging scenarios described in the UA-DETRAC dataset.
Weather. Weather conditions, such as rainy and night scenes, significantly affect the performance of detectors, keeping the highest AP below , as shown in Figure 8. Existing object detectors do not perform well when the appearance changes caused by poor lighting conditions are significant, e.g., the best-performing CompACT method achieves only AP at night. In contrast, object detectors perform relatively well on cloudy and sunny days.
Vehicle category. As shown in Figure 9(a)-(d), across the different categories of vehicles, the detectors only perform relatively well on cars. The AP of the best R-CNN method is less than for buses. This can be attributed to two reasons. First, it is difficult to handle the drastic variations of scale and aspect ratio in bus images. Second, the limited number of training samples affects the performance of the object detectors (i.e., the number of buses in the training set is small, see Figure 5(a)).
Scale. Figure 10(a)-(d) shows the results for each vehicle scale in the UA-DETRAC-test set. For small-scale vehicles, most detectors achieve over AP except the DPM method. At the medium scale, the CompACT method achieves the best performance with AP. All algorithms perform poorly on large-scale vehicles (less than AP). This indicates that current detectors are incapable of dealing with large-scale vehicles, e.g., buses.
Occlusion ratio. In Figure 11(a)-(d), we show the influence of occlusion on detection performance in three categories, i.e., no occlusion, partial occlusion, and heavy occlusion, as described in Section 2.1. When partial occlusion occurs (occlusion ratio is between ), the performance drops significantly (more than AP). Moreover, when heavy occlusion occurs (occlusion ratio is over ), the AP scores of all detectors are less than . Thus, there is significant room for improvement on detecting vehicles under heavy occlusion.
4.2 MOT System
The UA-DETRAC scores of the MOT systems constructed from the four object detection and six object tracking methods are presented in Table 2. As shown in Table 2, all tracking systems perform disappointingly, i.e., even the best PR-MOTA score is far below the perfect score (see Section 3.2). The MOT system CompACT+GOG obtains a relatively higher PR-MOTA score than the other systems.
The CompACT+H2T and CompACT+IHTLS systems achieve comparable performance, while DPM+CMOT performs worst among all systems with the lowest PR-MOTA score. After analyzing the results in Table 2, we draw two conclusions: (1) the general trend is that a complete MOT system achieves better performance with better detections, as reflected in the average PR-MOTA scores of all object tracking methods combined with the DPM, ACF, R-CNN, and CompACT detectors, respectively; (2) however, there also exist counter-examples, e.g., the MOT system ACF+CEM performs better than R-CNN+CEM (although R-CNN performs better than ACF as a detector), and the MOT system R-CNN+DCT performs better than CompACT+DCT (although CompACT performs better than R-CNN). These results suggest that it is important to choose an appropriate detector for each object tracking algorithm when constructing an MOT system. They also indicate that we should use multiple different detectors, rather than a single specific one, to evaluate object tracking methods comprehensively and fairly.
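The two ways of aggregating a detector-by-tracker score grid used in this analysis, averaging over trackers (to compare detectors) and over detectors (to rank trackers fairly), can be sketched as follows; the detector/tracker names are from this benchmark, but the numbers are made up for illustration:

```python
# Hypothetical PR-MOTA grid: rows are detectors, columns are trackers.
# The scores below are invented for illustration only.
results = {
    "DPM":     {"GOG": 5.5,  "CEM": 4.0},
    "CompACT": {"GOG": 14.0, "CEM": 5.0},
}

def mean_score_per_detector(grid):
    """Average the scores over trackers, per detector."""
    return {det: sum(s.values()) / len(s) for det, s in grid.items()}

def mean_score_per_tracker(grid):
    """Average the scores over detectors, per tracker,
    for a detector-agnostic ranking of tracking methods."""
    trackers = next(iter(grid.values())).keys()
    return {t: sum(grid[d][t] for d in grid) / len(grid) for t in trackers}
```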
4.3 Object Tracking
We report the tracking results of the six object tracking methods on the different subsets of the UA-DETRAC benchmark: overall (Table 3); easy, medium, and hard (Table 5); and cloudy, rainy, sunny, and night (Table 4).
Based on the previous discussion, we combine each object tracking method with all the detectors and report the average performance. As presented in Table 3, the GOG, DCT, and H2T algorithms produce the top three average PR-MOTA scores, while the CEM method performs worst with the lowest average PR-MOTA score. On the easy set, the GOG algorithm performs well, and H2T achieves a comparable average PR-MOTA score. However, on the hard set, the performance of all object tracking methods is poor, and the average PR-MOTA scores of three of them, i.e., CMOT, IHTLS, and H2T, are especially low.
In addition, as presented in Table 3, we find that the CMOT, H2T, DCT, and IHTLS methods perform well with high-quality detections (the CompACT and R-CNN methods) but poorly with low-quality detections (the DPM and ACF methods); e.g., there is a huge difference between the average PR-MOTA scores of DPM+CMOT and CompACT+CMOT. On the other hand, the CEM method performs relatively stably across detection qualities: there is little difference between the average PR-MOTA scores of DPM+CEM and CompACT+CEM. CMOT, H2T, DCT, and IHTLS adopt a local-to-global optimization strategy to associate the input detections, which fails to resolve the considerable number of false positives in low-quality detections and thus restricts the MOT performance. However, given high-quality detections, the strong appearance model (the CMOT method), motion models (the H2T and IHTLS methods), and trajectory refining mechanism (the DCT method) built into these methods help track the objects accurately and achieve good performance. Different from these methods, CEM uses a global energy minimization strategy to suppress false positives, which allows it to achieve relatively better performance even with low-quality detections. Since CEM does not specifically exploit target appearance or motion information, however, its performance does not improve significantly even with extremely high-quality detections. In summary, it is important to take advantage of both categories of methods in constructing a robust MOT system.
Notably, the GOG method produces the top average PR-MOTA score in the UA-DETRAC benchmark by sacrificing the IDS and FM scores; e.g., GOG produces the highest average PR-MOTA score together with the highest PR-IDS score (several times larger than those of the CEM, DCT, IHTLS, H2T, and CMOT methods) and the second-highest PR-FM score (several times larger than those of the CEM, DCT, H2T, and CMOT methods). That is, GOG aims to produce trajectories for all objects, even if some of them are false trajectories. Thus, the GOG method is not as competent as the other trackers in certain applications, e.g., human-computer interaction and sports analysis, where tracking accuracy is the top priority.
5 Running Efficiency
Different object detection algorithms require different platforms for testing; e.g., the R-CNN method requires a GPU for both training and testing, while a CPU desktop is sufficient for the ACF method. As such, it is hard to compare them fairly, and we report the running times of the evaluated object detection algorithms in Table 6 for reference only.
|Platform|CPU: 4×Intel Core i7-6600U (2.60GHz)|CPU: 2×Intel Xeon E5-2470v2 (2.4GHz)|CPU: 2×Intel Xeon E5-2470v2 (2.4GHz), GPU: Tesla K40|CPU: 2×Intel Xeon E5-2470v2 (2.4GHz), GPU: Tesla K40|
Meanwhile, we report the running times of all the evaluated object tracking algorithms in Table 7. Specifically, for each object tracking algorithm, given the input detections produced by the different detection algorithms, i.e., DPM, ACF, R-CNN, and CompACT, at the threshold with the largest F-score, the average execution speeds on the sequences in the UA-DETRAC-test set are presented in Table 7. We run all the object tracking methods on a laptop with a 2.9 GHz Intel i7 processor and 16 GB memory. Frames per second (fps) is used to measure the speed of each tracker.
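The fps measurement described above can be sketched as a simple wall-clock average over a sequence; `tracker_step` and the frame source are hypothetical stand-ins for any per-frame tracking routine:

```python
import time

def measure_fps(tracker_step, frames):
    """Average execution speed in frames per second over a sequence.
    `tracker_step` is any callable that processes one frame."""
    start = time.perf_counter()
    for frame in frames:
        tracker_step(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")

# Illustrative usage with a trivial per-frame routine.
frames = list(range(100))
fps = measure_fps(lambda frame: None, frames)
```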
In this work, we present a large-scale multi-object tracking benchmark (UA-DETRAC) consisting of video sequences with rich annotations. We perform comprehensive experiments to evaluate the performance of four object detection and six object tracking methods. Based on our benchmark study, there are two main aspects to consider when building MOT systems. First, object detection and tracking should be considered jointly when evaluating the performance of MOT systems; we suggest using the UA-DETRAC protocol for this purpose. Second, it is necessary to integrate the object detection and tracking tasks into a unified framework that exploits shared information when constructing a real-world MOT system.
In the future, there are several important directions in which we would like to improve the current work. First, we would like to enrich the UA-DETRAC dataset with more sequences and richer annotations. More importantly, we would like to extend this dataset with videos for pedestrian detection and tracking evaluation. In terms of the UA-DETRAC evaluation protocol, we will perform theoretical analysis of the specific metrics we use and further improve the performance of MOT systems.
-  S. Agarwal, A. Awan, and D. Roth. Learning to detect objects in images via a sparse, part-based representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11):1475–1490, 2004.
-  M. Andriluka, S. Roth, and B. Schiele. People-tracking-by-detection and people-detection-by-tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
-  A. Andriyenko and K. Schindler. Multi-target tracking by continuous energy minimization. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1265–1272, 2011.
-  A. Andriyenko, K. Schindler, and S. Roth. Discrete-continuous optimization for multi-target tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1926–1933, 2012.
-  S. H. Bae and K. Yoon. Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1218–1225, 2014.
-  F. Bashir and F. Porikli. Performance evaluation of object detection and tracking systems. In PETS, 2006.
-  B. Benfold and I. Reid. Stable multi-target tracking in real-time surveillance video. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3457–3464, 2011.
-  J. Berclaz, F. Fleuret, E. Türetken, and P. Fua. Multiple object tracking using k-shortest paths optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(9):1806–1819, 2011.
-  M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. J. V. Gool. Online multi-person tracking-by-detection from a single, uncalibrated camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(9):1820–1833, 2011.
-  W. Brendel, M. R. Amer, and S. Todorovic. Multiobject tracking as maximum weight independent set. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1273–1280, 2011.
-  Z. Cai, M. Saberian, and N. Vasconcelos. Learning complexity-aware cascades for deep pedestrian detection. In Proceedings of the IEEE International Conference on Computer Vision, 2015.
-  G. Csurka, C. R. Dance, L. Fan, J. Willamowski, and C. Bray. Visual categorization with bags of keypoints. In In Workshop on Statistical Learning in Computer Vision, ECCV, pages 1–22, 2004.
-  N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 886–893, 2005.
-  N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. In Proceedings of European Conference on Computer Vision, pages 428–441, 2006.
-  A. Dehghan, S. M. Assari, and M. Shah. GMMCP-Tracker:globally optimal generalized maximum multi clique problem for multiple object tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  A. Delong, A. Osokin, H. N. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. International Journal of Computer Vision, 96(1):1–27, 2012.
-  C. Dicle, O. I. Camps, and M. Sznaier. The way they move: Tracking multiple targets with similar appearance. In Proceedings of the IEEE International Conference on Computer Vision, pages 2304–2311, 2013.
-  P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(8):1532–1545, 2014.
-  P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: An evaluation of the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4):743–761, 2012.
-  A. Ess, B. Leibe, and L. J. V. Gool. Depth and appearance for mobile scene analysis. In Proceedings of the IEEE International Conference on Computer Vision, pages 1–8, 2007.
-  M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, 2015.
-  P. F. Felzenszwalb, R. B. Girshick, D. A. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2010.
-  J. M. Ferryman and A. Shahrokni. PETS2009: dataset and challenge. In IEEE International Conference on Advanced Video and Signal-Based Surveillance, pages 1–6, 2009.
-  T. Fortmann, Y. B. Shalom, and M. Scheffe. Sonar tracking of multiple targets using joint probabilistic data association. IEEE J. Oceanic Engineering, 8(3):173–184, 1983.
-  A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3354–3361, 2012.
-  R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
-  C. Huang, Y. Li, and R. Nevatia. Multiple target tracking by learning-based hierarchical association of detection responses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(4):898–910, 2013.
-  S. Hwang, J. Park, N. Kim, Y. Choi, and I. S. Kweon. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  M. Isard and A. Blake. Condensation - conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5–28, 1998.
-  H. Izadinia, I. Saleemi, W. Li, and M. Shah. (MP)²T: Multiple people multiple parts tracker. In Proceedings of European Conference on Computer Vision, pages 100–114, 2012.
-  H. Jiang, S. Fels, and J. J. Little. A linear programming approach for multiple object tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
-  Z. Khan, T. R. Balch, and F. Dellaert. MCMC-based particle filtering for tracking a variable number of interacting targets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(11):1805–1819, 2005.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1106–1114, 2012.
-  C. H. Lampert, M. B. Blaschko, and T. Hofmann. Beyond sliding windows: Object localization by efficient subwindow search. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
-  L. Leal-Taixé, A. Milan, I. D. Reid, S. Roth, and K. Schindler. Motchallenge 2015: Towards a benchmark for multi-target tracking. arXiv preprint, abs/1504.01942, 2015.
-  W. F. Leven and A. D. Lanterman. Unscented kalman filters for multiple target tracking with symmetric measurement equations. IEEE Trans. Automat. Contr., 54(2):370–375, 2009.
-  Y. Li, C. Huang, and R. Nevatia. Learning to associate: Hybridboosted multi-target tracker for crowded scene. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 2953–2960, 2009.
-  L. Marchesotti, L. Marcenaro, G. Ferrari, and C. S. Regazzoni. Multiple object tracking under heavy occlusions by using kalman filters based on shape matching. In Proceedings of IEEE International Conference on Image Processing, pages 341–344, 2002.
-  D. Mikami, K. Otsuka, and J. Yamato. Memory-based particle filter for face pose tracking robust under complex dynamics. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 999–1006, 2009.
-  A. Milan, L. Leal-Taixé, I. D. Reid, S. Roth, and K. Schindler. Mot16: A benchmark for multi-object tracking. arXiv preprint, abs/1603.00831, 2016.
-  A. Milan, K. Schindler, and S. Roth. Challenges of ground truth evaluation of multi-target tracking. In CVPR Workshops, pages 735–742, 2013.
-  W. Ouyang and X. Wang. A discriminative deep model for pedestrian detection with occlusion handling. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3258–3265, 2012.
-  G. Overett, L. Petersson, N. Brewer, L. Andersson, and N. Pettersson. A new pedestrian dataset for supervised learning. pages 373–378, 2008.
-  C. Papageorgiou and T. Poggio. A trainable system for object detection. International Journal of Computer Vision, 38(1):15–33, 2000.
-  H. Pirsiavash, D. Ramanan, and C. C. Fowlkes. Globally-optimal greedy algorithms for tracking a variable number of objects. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1201–1208, 2011.
-  J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016.
-  D. B. Reid. An algorithm for tracking multiple targets. IEEE Transactions on Automatic Control, 24:843–854, 1979.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, pages 1–42, April 2015.
-  D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1-3):7–42, 2002.
-  X. Shi, H. Ling, W. Hu, C. Yuan, and J. Xing. Multi-target tracking with motion context in tensor power iteration. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3518–3525, 2014.
-  J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 1470–1477, 2003.
-  F. Solera, S. Calderara, and R. Cucchiara. Towards the evaluation of reproducible robustness in tracking-by-detection. In IEEE International Conference on Advanced Video and Signal-Based Surveillance, 2015.
-  R. Stiefelhagen, K. Bernardin, R. Bowers, J. S. Garofolo, D. Mostefa, and P. Soundararajan. The clear 2006 evaluation. In CLEAR, pages 1–44, 2006.
-  S. Tang, B. Andres, M. Andriluka, and B. Schiele. Subgraph decomposition for multi-target tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 5033–5041, 2015.
-  K. E. A. van de Sande, J. R. R. Uijlings, T. Gevers, and A. W. M. Smeulders. Segmentation as selective search for object recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 1879–1886, 2011.
-  P. A. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.
-  L. Wen, Z. Lei, S. Lyu, S. Z. Li, and M.-H. Yang. Exploiting hierarchical dense structures on hypergraphs for multi-object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
-  L. Wen, W. Li, Z. Lei, D. Yi, and S. Z. Li. Multiple target tracking based on undirected hierarchical relation hypergraph. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3457–3464, 2014.
-  C. Wojek, S. Walk, and B. Schiele. Multi-cue onboard pedestrian detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 794–801, 2009.
-  Z. Wu, N. W. Fuller, D. H. Theriault, and M. Betke. A thermal infrared video benchmark for visual analysis. In Workshops in Conjunction with IEEE Conference on Computer Vision and Pattern Recognition, pages 201–208, 2014.
-  J. Yan, Z. Lei, L. Wen, and S. Z. Li. The fastest deformable part model for object detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 2497–2504, 2014.
-  B. Yang and R. Nevatia. Online learned discriminative part-based appearance models for multi-human tracking. In Proceedings of European Conference on Computer Vision, pages 484–498, 2012.
-  M. Yang, Y. Liu, L. Wen, Z. You, and S. Z. Li. A probabilistic framework for multitarget tracking with mutual occlusions. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1298–1305, 2014.
-  S. Yang, P. Luo, C. C. Loy, and X. Tang. WIDER FACE: A face detection benchmark. arXiv preprint, abs/1511.06523, 2015.
-  A. R. Zamir, A. Dehghan, and M. Shah. GMCP-Tracker: Global multi-object tracking using generalized minimum clique graphs. In Proceedings of European Conference on Computer Vision, pages 343–356, 2012.
-  L. Zhang, R. Chu, S. Xiang, S. Liao, and S. Z. Li. Face detection based on multi-block LBP representation. In ICB, pages 11–18, 2007.
-  L. Zhang, Y. Li, and R. Nevatia. Global data association for multi-object tracking using network flows. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
-  C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In Proceedings of European Conference on Computer Vision, pages 391–405, 2014.
We give the proof of the range of the PR-MOTA score $\Psi$. The PR-MOTA score is defined as the line integral along the PR curve, i.e.,
$$\Psi = \int_{\Omega} M(p, r)\, \mathrm{d}s,$$
where $\Omega$ is the PR curve and $M(p, r)$ is the MOTA value corresponding to the precision $p$ and recall $r$ on the PR curve. Since the MOTA score is unbounded below, i.e., $M(p, r) \in (-\infty, 100]$, the lower bound of $\Psi$ is $-\infty$. We derive the upper bound of $\Psi$ as follows. Let $\{(p_i, r_i)\}_{i=0}^{n}$ be dividing points on the PR curve, where the $i$-th point corresponds to the precision $p_i$ and recall $r_i$ on the curve, and $(p_0, r_0)$ and $(p_n, r_n)$ are the two end points of the PR curve. We denote the length of the $i$-th arc, determined by $(p_{i-1}, r_{i-1})$ and $(p_i, r_i)$, as $\ell_i$. Thus, we have $L = \sum_{i=1}^{n} \ell_i$. Let $M_i$ be the MOTA value on the $i$-th arc. Then, the PR-MOTA score can be represented as
$$\Psi = \lim_{n \to \infty} \sum_{i=1}^{n} M_i\, \ell_i.$$
Since $M_i \le 100$, we have $\sum_{i=1}^{n} M_i\, \ell_i \le 100 \sum_{i=1}^{n} \ell_i$. Thus, we have $\Psi \le 100\, L$.
Since the precision and recall on the PR curve are in the interval $[0, 1]$, i.e., $p_i \in [0, 1]$ and $r_i \in [0, 1]$, we have $\sum_{i=1}^{n} |p_i - p_{i-1}| \le 1$ and $\sum_{i=1}^{n} |r_i - r_{i-1}| \le 1$. Because $\ell_i \le |p_i - p_{i-1}| + |r_i - r_{i-1}|$, we obtain $L \le 2$ and hence $\Psi \le 200$. Notably, the equal sign is achieved only under two constraints: 1) the PR curve consists of the two segments with precision $p = 1$ for all recall $r \in [0, 1]$ and recall $r = 1$ for all precision $p \in [0, 1]$; 2) for any precision and recall values of the input detection, the object tracking method always achieves the perfect MOTA score of $100$. These two constraints are ideal cases, which are impractical in real applications.
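The line integral that defines PR-MOTA can be approximated numerically over a polyline PR curve, as a sketch of the sum of arc lengths weighted by MOTA values; the curve and the MOTA function below are hypothetical illustrations:

```python
import math

def pr_mota(curve, mota):
    """Approximate the PR-MOTA line integral, sum_i M_i * l_i, over a
    polyline PR curve given as [(p_0, r_0), ..., (p_n, r_n)].
    `mota(p, r)` returns the MOTA value at a PR point."""
    total = 0.0
    for (p0, r0), (p1, r1) in zip(curve, curve[1:]):
        arc = math.hypot(p1 - p0, r1 - r0)       # arc length l_i
        pm, rm = (p0 + p1) / 2, (r0 + r1) / 2    # midpoint sample
        total += mota(pm, rm) * arc
    return total

# Ideal case: the corner path through (p, r) = (1, 1) has length 2,
# so a constant MOTA of 100 along it attains the bound of 200.
ideal = [(0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
bound = pr_mota(ideal, lambda p, r: 100.0)
```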