Computer vision has been attracting increasing attention in recent years due to its wide range of applications and recent breakthroughs on many important problems. As two core problems in computer vision, object detection and object tracking are under extensive investigation in both academia and real-world applications, e.g., transportation surveillance, smart cities, and human-computer interaction. Among the many factors and efforts that have driven the fast evolution of computer vision techniques, a notable contribution should be attributed to the invention and organization of numerous benchmarks, such as Caltech, KITTI, ImageNet, and MS COCO for object detection, and OTB, VOT, MOTChallenge, and UA-DETRAC for object tracking.
Drones (or UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, including agriculture, aerial photography, fast delivery, and surveillance. Consequently, automatic understanding of visual data collected from these platforms is in high demand, bringing computer vision and drones increasingly close together. Despite the great progress of general computer vision algorithms, such as detection and tracking, these algorithms are usually not optimal for dealing with sequences or images captured by drones, due to various challenges such as viewpoint and scale changes. Consequently, developing and evaluating new vision algorithms for drone-generated visual data is a key problem in drone-based applications. However, as pointed out in [9, 10], studies toward this goal are seriously limited by the lack of publicly available large-scale benchmarks or datasets. Some recent efforts [9, 11, 10] have been devoted to constructing datasets captured with drone platforms, focusing on object detection or tracking. These datasets are still limited in size and the scenarios covered, due to the difficulties in data collection and annotation. Thorough evaluation of existing or newly developed algorithms remains an open problem. Thus, a more general and comprehensive benchmark is desired to further boost visual analysis research on drone platforms.
Thus motivated, we present a large-scale benchmark, named VisDrone2018, with carefully annotated ground truth for various important computer vision tasks, to make vision meet drones. The benchmark dataset consists of video clips and static images, captured by various drone-mounted cameras and diverse in a wide range of aspects, including location (taken from different cities in China), environment (urban and country), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes). With thorough annotations of over a million object instances, the benchmark focuses on four tasks:
Task 1: object detection in images. Given a predefined set of object classes (e.g., cars and pedestrians), the task aims to detect objects of these classes from individual images taken from drones.
Task 2: object detection in videos. The task is similar to Task 1, except that objects are detected from videos taken from drones.
Task 3: single object tracking. The task aims to estimate the state of a target, indicated in the first frame, across frames in an online manner.
Task 4: multi-object tracking. The task aims to recover the object trajectories with (Task 4B) or without (Task 4A) the detection results in each video frame.
In this challenge we select ten categories of objects of frequent interest in drone applications, such as pedestrians and cars. Altogether we carefully annotated more than a million bounding boxes of object instances from these categories. Moreover, some important attributes, including the visibility of scenes, object category, and occlusion, are provided for better data usage. A detailed comparison of the provided drone datasets with other related benchmark datasets in object detection and tracking is presented in Table I.
| Object detection in images | scenario | #images | categories | avg. #labels/category | resolution | occlusion labels | year |
|---|---|---|---|---|---|---|---|
| ETHZ Pedestrian | life | | | | | | 2007 |
| EPFL Multi-View Car | exhibition | | | | | | 2009 |
| Caltech Pedestrian | driving | | | | | | 2012 |
| KITTI Detection | driving | | | | | | 2012 |
| PASCAL VOC2012 | life | | | | | | 2012 |
| ImageNet Object Detection | life | | | | | | 2013 |
| MS COCO | life | | | | | | 2014 |

| Object detection in videos | scenario | #frames | categories | avg. #labels/category | resolution | occlusion labels | year |
|---|---|---|---|---|---|---|---|
| ImageNet Video Detection | life | | | | | | 2015 |
| UA-DETRAC Detection | surveillance | | | | | | 2015 |

| Single object tracking | scenarios | #sequences | #frames | year |
|---|---|---|---|---|
| POT 210 | planar objects | | | 2018 |

| Multi-object tracking | scenario | #frames | categories | avg. #labels/category | resolution | occlusion labels | year |
|---|---|---|---|---|---|---|---|
| KITTI Tracking | driving | | | | | | 2013 |
| MOTChallenge 2015 | surveillance | | | | | | 2015 |
| UA-DETRAC Tracking | surveillance | | | | | | 2015 |
2 Related Work
In recent years, the computer vision community has developed various benchmarks for numerous tasks, including generic object detection [4, 3], pedestrian detection, single object tracking [5, 24], multi-object tracking [7, 8], 3D reconstruction, and optical flow [29, 2], which have been extremely helpful in advancing the state of the art in the respective areas. In this section, we review the most relevant drone-based benchmarks as well as other benchmarks in the object detection and tracking fields.
2.1 Drone-based Datasets
To date, there exist only a handful of drone-based datasets in the computer vision field. Hsieh et al. present a dataset for car counting, which consists of images captured in parking lot scenarios with a drone platform, including annotated cars. Robicquet et al. collect several video sequences with a drone platform on campuses, including various types of objects (i.e., pedestrians, bikes, skateboarders, cars, buses, and golf carts), which enable the design of new object tracking and trajectory forecasting algorithms. Barekatain et al. present the new Okutama-Action dataset for concurrent human action detection from an aerial view. The dataset includes minute-long, fully annotated sequences with action classes. The high-resolution UAV123 dataset is presented for single object tracking, which contains aerial video sequences with fully annotated frames, including the bounding boxes of people and their corresponding action labels. Li et al. capture video sequences of high diversity with drone cameras and manually annotate the bounding boxes of objects for single object tracking evaluation. Rozantsev et al. present two separate datasets for detecting flying objects, i.e., the UAV dataset and the aircraft dataset. The former comprises video sequences with annotated bounding boxes of objects, acquired by a camera mounted on a drone flying indoors and outdoors. The latter consists of publicly available videos of radio-controlled planes with annotated bounding boxes. In contrast to the aforementioned datasets, which are acquired in constrained scenarios for single object tracking or object detection and counting, our VisDrone2018 dataset is captured in various unconstrained urban scenes, focusing on four core problems in computer vision, i.e., object detection in images, object detection in videos, single object tracking, and multi-object tracking.
2.2 Object Detection Datasets
Several object detection benchmarks have been collected for evaluating object detection algorithms. Enzweiler and Gavrila present the Daimler dataset, captured by a vehicle driving through urban environments, with manually annotated pedestrians in both the training and testing sets. Caltech consists of 30Hz videos taken from a vehicle driving through regular traffic in an urban environment, with annotated bounding boxes of unique pedestrians. KITTI-D is designed to evaluate car, pedestrian, and cyclist detection algorithms in autonomous driving scenarios, with separate training and testing images. Mundhenk et al. create a large dataset for the classification, detection, and counting of cars, which contains unique cars from six different image sets, each covering a different geographical location and produced by different imagers. The recent UA-DETRAC benchmark [8, 33] provides annotated objects for vehicle detection.
The PASCAL VOC dataset [34, 35] is one of the pioneering works in the generic object detection field, designed to provide a standardized test bed for object detection, image classification, object segmentation, person layout, and action classification. ImageNet [36, 3] follows in the footsteps of the PASCAL VOC dataset, scaling up by more than an order of magnitude in the number of object classes and annotated images compared with PASCAL VOC 2012. Recently, Lin et al. release the MS COCO dataset, containing a large number of images with manually segmented object instances across its object categories. Notably, it contains object segmentation annotations, which are not available in ImageNet.
2.3 Object Tracking Datasets
Single object tracking is one of the fundamental problems in computer vision, which aims to estimate the trajectory of a target in a video sequence, given its initial state. In recent years, numerous datasets have been developed for single object tracking evaluation. Wu et al. develop a standard platform to evaluate single object tracking algorithms and later scale up its data size. Similarly, Liang et al. collect video sequences for evaluating color-enhanced trackers. To track the progress in the visual tracking field, Kristan et al. [38, 39, 24] organize annual VOT competitions, presenting new datasets and evaluation strategies for tracking evaluation. Smeulders et al. present the ALOV300 dataset, which contains video sequences with visual attributes, such as long duration, zooming camera, moving camera, and transparency. Li et al. construct a large-scale dataset with video sequences of pedestrians and rigid objects captured from moving cameras. Du et al. design a dataset of annotated video sequences, focusing on deformable object tracking in unconstrained environments. To evaluate tracking algorithms on higher frame rate video sequences, Galoogahi et al. propose a dataset of videos recorded by higher frame rate cameras in real-world scenarios. Besides video sequences captured by RGB cameras, Felsberg et al. [42, 43] organize a series of competitions from 2015 to 2017, focusing on visual tracking in thermal video sequences recorded by eight different types of sensors. In addition, an RGB-D tracking dataset is presented, which includes video clips with RGB and depth channels and manually annotated ground-truth bounding boxes.
Multi-object tracking is another important research problem with many applications, such as surveillance, behavior analysis, and autonomous driving. Some of the most widely used multi-object tracking evaluation datasets include PETS09, PETS16, KITTI-T, MOTChallenge [7, 47], and UA-DETRAC [8, 33]. Specifically, the PETS09 and PETS16 datasets mainly focus on multi-pedestrian detection, tracking, and counting in surveillance scenarios. KITTI-T is designed for object tracking in autonomous driving, recorded from a moving vehicle with the viewpoint of the driver. MOT15 and MOT16 aim to provide a unified dataset, platform, and evaluation protocol for multi-object tracking algorithms. Recently, the UA-DETRAC benchmark [8, 33] is constructed, which contains sequences filmed from a surveillance viewpoint to track multiple vehicles.
Moreover, in some scenarios, a network of cameras is set up to capture multi-view information to help multi-object tracking. The datasets in [48, 45] are recorded using multiple cameras with fully overlapping views in constrained environments. Other datasets are captured by non-overlapping cameras. For example, Chen et al. collect four datasets, each of which includes several cameras with non-overlapping views in real scenes and simulation environments. Another dataset is captured by multiple cameras in campus environments. Zhang et al. develop a dataset composed of several cameras covering both indoor and outdoor scenes at a university. Ristani et al. organize a challenge and present a large-scale, fully annotated and calibrated dataset, including 1080p video frames taken by multiple cameras with annotated identities.
3 VisDrone2018 Benchmark Dataset
3.1 Dataset Collection
A critical basis for effective algorithm evaluation is a thorough dataset. For this purpose, in VisDrone2018, we systematically collected the largest drone image/video dataset to the best of our knowledge. Our dataset consists of video clips and additional static images. The videos/images are acquired by various drone platforms, i.e., DJI Mavic and Phantom series (3, 3A, 3SE, 3P, 4, 4A, 4P), covering different scenarios across different cities in China, i.e., Tianjin, Hongkong, Daqing, Ganzhou, Guangzhou, Jincang, Liuzhou, Nanjing, Shaoxing, Shenyang, Nanyang, Zhangjiakou, Suzhou, and Xuzhou. The dataset covers various weather and lighting conditions, representing diverse scenarios in daily life. Both the video clips and the static images are captured at high resolutions. Some example images and video clips are shown in Figures 1 and 2.
A website, www.aiskyeye.com, is constructed for accessing the VisDrone2018 benchmark and performing evaluation of the four tasks, i.e., (1) object detection in images, (2) object detection in videos, (3) single object tracking, and (4) multi-object tracking. Specifically, a user is required to create an account using an institutional email address. After registration, the user can choose the tasks in which he or she decides to participate, and submit the results using the corresponding account. Notably, for each task, the images/videos in the training, validation, and testing sets are captured at different locations, but with similar scenarios. The manually annotated ground truths for training and validation are made available to users, but the ground truths of the testing set are withheld to avoid (over)fitting of algorithms. We encourage the participants to use the provided training data, but also allow them to use additional training data; the use of additional training data must be indicated during submission. In the following subsections, we describe each task and the corresponding data annotation and statistics in detail.
3.2 Task 1: Object Detection in Images
Given an input image and a predefined set of object categories, e.g., car and pedestrian, the task of object detection in images aims to locate all the object instances of these categories in the image (if any). Typically, and in our benchmark, for each object class we require a detection algorithm to predict the bounding box of each instance of that class in the image, with a real-valued confidence. VisDrone2018 provides an image dataset for this task, split into training, validation, and testing subsets. The images of the three subsets are taken at different locations, but share similar environments and attributes. We plot the number of objects per image vs. the percentage of images in each subset to show the distributions of the number of objects per image in the training, validation, and testing sets in Figure 5.
For object categories, we mainly focus on humans and vehicles in daily life, and define ten object categories of interest: pedestrian, person (if a human maintains a standing pose or is walking, we classify it as a pedestrian; otherwise, it is classified as a person), car, van, bus, truck, motor, bicycle, awning-tricycle, and tricycle. Some rarely occurring special vehicles (e.g., machineshop trucks, forklift trucks, and tankers) are ignored in evaluation. We manually annotate the bounding boxes of the different categories of objects in each image. In addition, we also provide two kinds of useful annotations, the occlusion ratio and the truncation ratio. Specifically, we use the fraction of an object being occluded to define the occlusion ratio, and define three degrees of occlusion: no occlusion, partial occlusion, and heavy occlusion. The truncation ratio indicates the degree to which an object's parts appear outside a frame. If an object is not fully captured within a frame, we annotate the bounding box across the frame boundary and estimate the truncation ratio based on the region outside the image. It is worth mentioning that a target is skipped during evaluation if its truncation ratio exceeds a preset threshold. We show some annotated examples in Figure 3, and present the number of objects with different occlusion degrees for the different object categories in the training, validation, and testing sets in Figure 6.
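The truncation ratio described above can be sketched as follows; this is a minimal illustration assuming an (x, y, w, h) box layout, with the function name our own rather than part of the benchmark toolkit.

```python
def truncation_ratio(box, frame_w, frame_h):
    """Fraction of a bounding box's area lying outside the image frame.

    `box` is (x, y, w, h) in pixels, annotated across the frame boundary
    as described above; the name and box layout are illustrative.
    """
    x, y, w, h = box
    area = w * h
    if area <= 0:
        return 0.0
    # Clip the annotated box to the frame and measure its visible part.
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, frame_w), min(y + h, frame_h)
    visible = max(0, x1 - x0) * max(0, y1 - y0)
    return 1.0 - visible / area
```

For instance, a 20x10 box with half its width outside the left frame edge has a truncation ratio of 0.5.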
3.2.1 Evaluation Criteria
We require each evaluated algorithm in Task 1 (object detection in images) to output a list of detected bounding boxes with confidence scores for each test image. Following the evaluation protocol in MS COCO, we use the AP, AP50, AP75, AR1, AR10, AR100, and AR500 metrics to evaluate the results of detection algorithms. These criteria penalize missing detections of objects as well as duplicate detections (two detection results for the same object instance). Specifically, AP is computed by averaging over all intersection over union (IoU) thresholds (i.e., in the range [0.50, 0.95] with a uniform step size of 0.05) of all categories, and is used as the primary metric for ranking. AP50 and AP75 are computed at the single IoU thresholds 0.5 and 0.75 over all categories, respectively. The AR1, AR10, AR100, and AR500 scores are the maximum recalls given 1, 10, 100, and 500 detections per image, averaged over all categories and IoU thresholds. Please refer to  for more details.
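As a concrete illustration of this protocol, the sketch below computes a COCO-style AP for a single class on a single image; function names are our own, and the real evaluation pools detections over the whole test set and all categories.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def ap_at_threshold(dets, gts, thr):
    """AP for one class: `dets` are (box, score) pairs, greedily matched
    in descending score order; unmatched or duplicate detections count
    as false positives, which penalizes two detections of one object."""
    if not gts:
        return 0.0
    dets = sorted(dets, key=lambda d: -d[1])
    matched = [False] * len(gts)
    points, tp = [], 0
    for i, (box, _score) in enumerate(dets, start=1):
        best_j, best_iou = -1, 0.0
        for j, g in enumerate(gts):
            if not matched[j] and iou(box, g) > best_iou:
                best_j, best_iou = j, iou(box, g)
        if best_j >= 0 and best_iou >= thr:
            matched[best_j] = True
            tp += 1
        points.append((tp / len(gts), tp / i))  # (recall, precision)
    # Interpolated area under the precision-recall curve.
    ap, prev_rec = 0.0, 0.0
    for k, (rec, _prec) in enumerate(points):
        envelope = max(p for _, p in points[k:])  # best precision at recall >= rec
        ap += (rec - prev_rec) * envelope
        prev_rec = rec
    return ap

def coco_style_ap(dets, gts):
    """Primary metric: AP averaged over IoU thresholds 0.50:0.05:0.95."""
    thrs = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at_threshold(dets, gts, t) for t in thrs) / len(thrs)
```

A detector that finds one of two ground-truth objects perfectly scores 0.5 under this sketch, since recall saturates at one half at every threshold.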
3.3 Task 2: Object Detection in Videos
Similar to Task 1, the task of object detection in videos aims to locate object instances from a predefined set of categories, but the detection is performed on a video instead of a static image. Specifically, given a video clip, a detection algorithm is required to produce a set of bounding boxes for each object instance in each video frame (if any), with real-valued confidences. We provide challenging video clips for this task, split into training, validation, and testing subsets. The videos of the three subsets are recorded at different locations, but share similar environments and attributes. We plot the number of objects per frame vs. the percentage of frames for the training, validation, and testing sets in Figure 8.
We use the same object categories as in Task 1 and provide manually annotated ground-truth bounding boxes in each video frame. As in Task 1, we also provide annotations of the occlusion and truncation ratios of each object. We show some annotated examples in Figure 4, and present the number of objects with different occlusion degrees for the different object categories in the training, validation, and testing sets in Figure 9.
3.3.1 Evaluation Criteria
For Task 2, we require each evaluated algorithm to generate a list of bounding box detections with confidences in each video frame. Motivated by the evaluation protocols in MS COCO and ILSVRC, we use the AP, AP50, AP75, AR1, AR10, AR100, and AR500 metrics to evaluate the results of detection algorithms, as in Task 1. Notably, the AP score is used as the primary metric for ranking methods. Please see [4, 52] for more details.
3.4 Task 3: Single Object Tracking
While the term “object tracking” can sometimes be ambiguous, in Task 3 we focus on generic single object tracking, also known as model-free tracking. In particular, for an input video sequence and the initial bounding box of the target object in the first frame, Task 3 requires a tracking algorithm to locate the target bounding boxes in the subsequent video frames. We provide video sequences with manually annotated target ground truths. Unlike most previous single object tracking benchmarks, we divide these video clips into training, validation, and testing sets. The tracking targets in these sequences include pedestrians, cars, buses, and animals. Some annotated examples and the statistics of the targets are presented in Figures 7 and 10.
3.4.1 Evaluation Criteria
For the single object tracking task, performance is evaluated by the success and precision scores, the same as in . Specifically, we plot the percentage of successfully tracked frames vs. the bounding box overlap threshold, and use the area under the curve (AUC) as the evaluation criterion. Meanwhile, we also plot the percentage of frames where the centers of the tracked object are within a given distance threshold of the ground truth, and use the percentage at the threshold of 20 pixels as the precision score in evaluation. Notably, the success score is used as the primary metric for ranking methods.
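The two scores above can be sketched as follows; this is a minimal illustration with our own function names, assuming boxes in (x, y, w, h) format and a 20-pixel distance threshold as in the OTB-style protocol.

```python
import math

def overlap(a, b):
    """IoU of two boxes given as (x, y, w, h)."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred, gt):
    """Area under the success plot: for each overlap threshold in [0, 1],
    the fraction of frames whose IoU exceeds that threshold."""
    ious = [overlap(p, g) for p, g in zip(pred, gt)]
    thresholds = [i / 100 for i in range(101)]
    curve = [sum(o > t for o in ious) / len(ious) for t in thresholds]
    return sum(curve) / len(curve)

def precision_at(pred, gt, dist=20.0):
    """Fraction of frames whose predicted center lies within `dist`
    pixels of the ground-truth center."""
    hits = 0
    for p, g in zip(pred, gt):
        dx = (p[0] + p[2] / 2) - (g[0] + g[2] / 2)
        dy = (p[1] + p[3] / 2) - (g[1] + g[3] / 2)
        hits += math.hypot(dx, dy) <= dist
    return hits / len(pred)
```

In practice these per-sequence scores are averaged over all test sequences before ranking trackers.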
3.5 Task 4: Multi-Object Tracking
Given an input video sequence, multi-object tracking aims to recover the trajectories of objects in the video. The task uses the same data as Task 2 (i.e., object detection in videos). Depending on the availability of prior object detection results, Task 4 is divided into two sub-tasks, denoted Task 4A (without prior detection) and Task 4B (with prior detection). Specifically, for Task 4A, an evaluated algorithm is required to recover the trajectories of objects in video sequences without taking object detection results as input. By contrast, for Task 4B, prior object detection results are provided and an evaluated algorithm can work on top of them. The number of objects vs. the percentage of frames in the training, validation, and testing sets is plotted in Figure 8; some annotated examples are given in Figure 11; and the number of objects with different occlusion degrees for the different object categories in the training, validation, and testing sets is presented in Figure 9.
3.5.1 Evaluation Criteria
For Task 4A, we use the protocol in  to evaluate the tracking performance. Specifically, each algorithm is required to output a list of bounding boxes with confidence scores and the corresponding identities. We sort the tracklets (formed by the bounding box detections with the same identity) according to the average confidence of their bounding box detections. A tracklet is considered correct if its intersection over union (IoU) overlap with a ground-truth tracklet is larger than a threshold; three thresholds are used in evaluation. The performance of an algorithm is evaluated by averaging the mean average precision (mAP) per object class over the different thresholds. Please refer to  for more details.
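The tracklet-level overlap used for matching can be sketched as below; this is an illustrative implementation under the assumption that frames where only one tracklet exists contribute zero overlap, with the function name and data layout our own.

```python
def tracklet_iou(pred, gt):
    """Mean per-frame box IoU between a predicted and a ground-truth
    tracklet; each tracklet is a dict mapping frame index to a box
    (x1, y1, x2, y2). Frames where either tracklet is absent count as
    zero overlap, so the mean is taken over the union of frames."""
    def iou(a, b):
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    all_frames = set(pred) | set(gt)
    shared = set(pred) & set(gt)
    return sum(iou(pred[f], gt[f]) for f in shared) / len(all_frames)
```

A tracklet is then counted as correct when this score exceeds the evaluation threshold, and mAP is computed over the confidence-sorted tracklets exactly as in detection.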
For Task 4B, we use the protocol in  to evaluate algorithm performance. More specifically, the average rank over a set of metrics (i.e., MOTA, IDF1, FAF, MT, ML, FP, FN, IDS, FM, and Hz) is used to compare different algorithms. The MOTA metric combines three error sources: FP, FN, and IDS. The IDF1 metric indicates the ratio of correctly identified detections over the average number of ground-truth and computed detections. The FAF metric indicates the average number of false alarms per frame. The FP metric counts the total number of tracker outputs that are false alarms, and FN is the total number of targets missed by the tracked trajectories over all frames. The IDS metric counts the total number of times that the matched identity of a tracked trajectory changes, while FM is the total number of times that trajectories are fragmented. Both the IDS and FM metrics reflect the accuracy of the tracked trajectories. The ML and MT metrics measure the percentage of ground-truth trajectories that are tracked for less than 20% and more than 80% of their time spans, respectively. The Hz metric indicates the processing speed of the algorithm.
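The two headline metrics can be written out directly from the definitions above; a minimal sketch, assuming the error counts have already been accumulated over all frames (the function names are our own).

```python
def mota(fp, fn, ids, num_gt):
    """MOTA = 1 - (FP + FN + IDS) / (total ground-truth objects).

    `fp`, `fn`, and `ids` are totals over all frames and `num_gt` is the
    total number of ground-truth boxes; MOTA can be negative when the
    errors outnumber the ground-truth objects."""
    return 1.0 - (fp + fn + ids) / num_gt

def idf1(id_tp, num_gt, num_pred):
    """IDF1: correctly identified detections over the average number of
    ground-truth and predicted detections."""
    return 2.0 * id_tp / (num_gt + num_pred)
```

For example, a tracker with 10 false positives, 20 misses, and 5 identity switches over 100 ground-truth boxes attains a MOTA of 0.65.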
We introduce a new large-scale benchmark, VisDrone2018, to facilitate research on object detection and tracking on drone platforms. With a large number of worker hours, a vast collection of object instances has been gathered, annotated, and organized to drive the advancement of object detection and tracking algorithms. We place emphasis on capturing images and video clips in real-life environments. Notably, the dataset is recorded in different cities in China with various drone platforms, featuring diverse real-world scenarios. We provide a rich set of annotations, including more than a million annotated object instances along with several important attributes. The VisDrone2018 benchmark is made available to the research community through the project website: www.aiskyeye.com. We expect the benchmark to largely boost the research and development of visual analysis on drone platforms.
-  P. Dollár, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 4, pp. 743–761, 2012.
-  A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354–3361.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li, “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
-  T. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: common objects in context,” in Proceedings of European Conference on Computer Vision, 2014, pp. 740–755.
-  Y. Wu, J. Lim, and M. Yang, “Object tracking benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834–1848, 2015.
-  L. Cehovin, A. Leonardis, and M. Kristan, “Visual object tracking performance measures revisited,” IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1261–1274, 2016.
-  L. Leal-Taixé, A. Milan, I. D. Reid, S. Roth, and K. Schindler, “Motchallenge 2015: Towards a benchmark for multi-target tracking,” CoRR, vol. abs/1504.01942, 2015.
-  L. Wen, D. Du, Z. Cai, Z. Lei, M. Chang, H. Qi, J. Lim, M. Yang, and S. Lyu, “UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking,” CoRR, vol. abs/1511.04136, 2015.
-  M. Mueller, N. Smith, and B. Ghanem, “A benchmark and simulator for UAV tracking,” in Proceedings of European Conference on Computer Vision, 2016, pp. 445–461.
-  M. Hsieh, Y. Lin, and W. H. Hsu, “Drone-based object counting by spatially regularized regional proposal network,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
-  A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, “Learning social etiquette: Human trajectory understanding in crowded scenes,” in Proceedings of European Conference on Computer Vision, 2016, pp. 549–565.
-  S. Agarwal, A. Awan, and D. Roth, “Learning to detect objects in images via a sparse, part-based representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pp. 1475–1490, 2004.
-  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886–893.
-  A. Ess, B. Leibe, and L. J. V. Gool, “Depth and appearance for mobile scene analysis,” in Proceedings of the IEEE International Conference on Computer Vision, 2007, pp. 1–8.
-  M. Andriluka, S. Roth, and B. Schiele, “People-tracking-by-detection and people-detection-by-tracking,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
-  M. Özuysal, V. Lepetit, and P. Fua, “Pose estimation for category specific multiview object localization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 778–785.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,” http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
-  S. Razakarivony and F. Jurie, “Vehicle detection in aerial imagery : A small target detection benchmark,” Journal of Visual Communication and Image Representation, vol. 34, pp. 187–203, 2016.
-  T. N. Mundhenk, G. Konjevod, W. A. Sakla, and K. Boakye, “A large contextual dataset for classification, detection and counting of cars with deep learning,” in Proceedings of European Conference on Computer Vision, 2016, pp. 785–800.
-  “Mot17 challenge,” https://motchallenge.net/.
-  M. Barekatain, M. Martí, H. Shih, S. Murray, K. Nakayama, Y. Matsuo, and H. Prendinger, “Okutama-action: An aerial view video dataset for concurrent human action detection,” in Workshops in Conjunction with the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2153–2160.
-  A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: An experimental survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1442–1468, 2014.
-  P. Liang, E. Blasch, and H. Ling, “Encoding color information for visual tracking: Algorithms and benchmark,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5630–5644, 2015.
-  M. Kristan et al., “The visual object tracking VOT2016 challenge results,” in Workshops in Conjunction with the European Conference on Computer Vision, 2016, pp. 777–823.
-  H. K. Galoogahi, A. Fagg, C. Huang, D. Ramanan, and S. Lucey, “Need for speed: A benchmark for higher frame rate object tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1134–1143.
-  P. Liang, Y. Wu, H. Lu, L. Wang, C. Liao, and H. Ling, “Planar object tracking in the wild: A benchmark,” in Proceedings of the IEEE Int’l Conference on Robotics and Automation (ICRA), 2018.
-  E. Ristani, F. Solera, R. S. Zou, R. Cucchiara, and C. Tomasi, “Performance measures and a data set for multi-target, multi-camera tracking,” in Workshops in Conjunction with the European Conference on Computer Vision, 2016, pp. 17–35.
-  S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A comparison and evaluation of multi-view stereo reconstruction algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2006, pp. 519–528.
-  S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski, “A database and evaluation methodology for optical flow,” International Journal of Computer Vision, vol. 92, no. 1, pp. 1–31, 2011.
-  S. Li and D. Yeung, “Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models,” in Association for the Advancement of Artificial Intelligence, 2017, pp. 4140–4146.
-  A. Rozantsev, V. Lepetit, and P. Fua, “Detecting flying objects using a single moving camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 5, pp. 879–892, 2017.
-  M. Enzweiler and D. M. Gavrila, “Monocular pedestrian detection: Survey and experiments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179–2195, 2009.
-  S. Lyu, M. Chang, D. Du, L. Wen, H. Qi, Y. Li, Y. Wei, L. Ke, T. Hu, M. D. Coco, P. Carcagnì, and et al., “UA-DETRAC 2017: Report of AVSS2017 & IWT4S challenge on advanced traffic monitoring,” in IEEE International Conference on Advanced Video and Signal Based Surveillance, 2017, pp. 1–7.
-  M. Everingham, L. J. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman, “The pascal visual object classes (VOC) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.
-  M. Everingham, S. M. A. Eslami, L. J. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman, “The pascal visual object classes challenge: A retrospective,” International Journal of Computer Vision, vol. 111, no. 1, pp. 98–136, 2015.
-  J. Deng, W. Dong, R. Socher, L. Li, K. Li, and F. Li, “Imagenet: A large-scale hierarchical image database,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
-  Y. Wu, J. Lim, and M. Yang, “Online object tracking: A benchmark,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 2411–2418.
-  M. Kristan et al., “The visual object tracking VOT2014 challenge results,” in Workshops in Conjunction with the European Conference on Computer Vision, 2014, pp. 191–217.
-  M. Kristan, J. Matas, A. Leonardis, M. Felsberg, L. Cehovin, G. Fernández, T. Vojír, G. Häger, G. Nebehay, and R. P. Pflugfelder, “The visual object tracking VOT2015 challenge results,” in Workshops in Conjunction with the IEEE International Conference on Computer Vision, 2015, pp. 564–586.
-  A. Li, M. Li, Y. Wu, M.-H. Yang, and S. Yan, “NUS-PRO: A new visual tracking challenge,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, pp. 1–15.
-  D. Du, H. Qi, W. Li, L. Wen, Q. Huang, and S. Lyu, “Online deformable object tracking based on structure-aware hyper-graph,” IEEE Transactions on Image Processing, vol. 25, no. 8, pp. 3572–3584, 2016.
-  M. Felsberg, A. Berg, G. Häger, J. Ahlberg, M. Kristan, J. Matas, A. Leonardis, L. Cehovin, G. Fernández, T. Vojír, G. Nebehay, and R. P. Pflugfelder, “The thermal infrared visual object tracking VOT-TIR2015 challenge results,” in Workshops in Conjunction with the IEEE International Conference on Computer Vision, 2015, pp. 639–651.
-  M. Felsberg et al., “The thermal infrared visual object tracking VOT-TIR2016 challenge results,” in Workshops in Conjunction with the European Conference on Computer Vision, 2016, pp. 824–849.
-  S. Song and J. Xiao, “Tracking revisited using RGBD camera: Unified benchmark and baselines,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 233–240.
-  J. Ferryman and A. Shahrokni, “Pets2009: Dataset and challenge,” in IEEE International Conference on Advanced Video and Signal-Based Surveillance, 2009, pp. 1–6.
-  L. Patino, “PETS2016 dataset website,” http://www.cvg.reading.ac.uk/PETS2016/a.html, [Online; accessed 23-February-2017].
-  A. Milan, L. Leal-Taixé, I. D. Reid, S. Roth, and K. Schindler, “Mot16: A benchmark for multi-object tracking,” arXiv preprint, vol. abs/1603.00831, 2016.
-  F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, “Multicamera people tracking with a probabilistic occupancy map,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 267–282, 2008.
-  W. Chen, L. Cao, X. Chen, and K. Huang, “An equalized global graph model-based approach for multicamera object tracking,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 11, pp. 2367–2381, 2017.
-  C. Kuo, C. Huang, and R. Nevatia, “Inter-camera association of multi-target tracks by on-line learned appearance affinity models,” in Proceedings of European Conference on Computer Vision, 2010, pp. 383–396.
-  S. Zhang, E. Staudt, T. Faltemier, and A. K. Roy-Chowdhury, “A camera network tracking (camnet) dataset and performance baseline,” in IEEE Winter Conference on Applications of Computer Vision, 2015, pp. 365–372.
-  E. Park, W. Liu, O. Russakovsky, J. Deng, F.-F. Li, and A. Berg, “Large Scale Visual Recognition Challenge 2017,” http://image-net.org/challenges/LSVRC/2017.
-  A. Milan, L. Leal-Taixé, K. Schindler, D. Cremers, S. Roth, and I. Reid, “Multiple Object Tracking Benchmark 2016,” https://motchallenge.net/results/MOT16/.