Drones, or more generally unmanned aerial vehicles (UAVs), are widely used in our daily life. Specifically, the application scenarios of drone based visual tracking cover live broadcast, battlefield surveillance, criminal investigation, sports and entertainment [39, 54, 17]. Compared with static cameras and handheld mobile devices, drones can move dynamically and cover a wide ground area, which makes them well suited to tracking fast-moving targets.
To perform robust tracking on drones, large-scale datasets with high-quality annotations play a critical role in promoting the development of algorithms. Recently, several benchmark datasets have been collected with a single drone, including UAV123, Campus, VisDrone-2018, and UAVDT. As shown in Figure 1, compared with static cameras that only collect data in a fixed area, a drone can dynamically follow the target in the air with a broad view. However, aerial capture also brings additional challenges to visual tracking, including tiny targets, camera motion, and high-density distributions of targets.
To alleviate these issues, incorporating multiple drones is an effective way to improve the robustness of object tracking to occlusion and appearance ambiguities. Thus, several algorithms focus on long-term tracking and re-identification with multiple cameras in video surveillance [5, 43, 45, 6, 9, 25, 44, 8, 30]. In the past few years, a few multi-camera benchmark datasets have been constructed with overlapping or non-overlapping camera views [1, 53, 44]. Some datasets with fully overlapping views are constrained to short time intervals and controlled conditions. These multi-camera datasets are specially collected for multi-object tracking or person re-identification across cameras.
Although many datasets are available for visual tracking, they are built for either single-drone tracking or multi-camera tracking; there are few benchmark datasets for multi-drone visual tracking. In this paper, to combine the advantages of both drone based tracking and multi-camera tracking, we present a multi-drone single object tracking (MDOT) dataset. MDOT consists of groups of video clips with high-resolution frames taken by two drones and groups of video clips with high-resolution frames taken by three drones. In each group of video clips, the same target is tracked by multiple drones. Moreover, we annotate ten types of attributes, including daylight, night, camera motion, partial occlusion, full occlusion, out of view, similar object, viewpoint change, illumination variation, and low resolution. To evaluate tracking algorithms on our dataset, we propose two new evaluation metrics, i.e., the automatic fusion score (AFS) and the ideal fusion score (IFS). Specifically, AFS measures the performance of a multi-drone tracker that fuses the tracking results with an online fusion strategy, while IFS is the ideal fusion performance under the assumption that the multi-drone system always selects the tracking result of the best-performing drone. On the other hand, to exploit multi-drone complementarity, we propose an agent sharing network (ASNet), which shares templates across drones in a self-supervised manner and fuses the tracking results automatically for robust and accurate visual tracking. In addition, a re-detection strategy is built into the per-drone tracker to handle target drift by enlarging the search region once the target is judged to be lost. Experiments on MDOT demonstrate that ASNet greatly outperforms recent state-of-the-art tracking algorithms.
The contributions of this paper can be summarized as follows.
To the best of our knowledge, MDOT is the first multi-drone single object tracking dataset; it consists of groups of video clips with high-resolution frames and rich annotations.
Two new evaluation metrics are designed for multi-drone single object tracking, i.e., the automatic fusion score (AFS) and the ideal fusion score (IFS). AFS evaluates the performance of a multi-drone tracker that fuses the tracking results using learned weights, while IFS is the theoretically optimal fusion performance (upper bound) of a multi-drone tracker. IFS is proposed specifically to inspire researchers to design superior multi-drone trackers with more effective fusion strategies.
The agent sharing network (ASNet) is proposed to perform multi-drone visual tracking; it effectively exploits the information shared across drones in a view-aware manner. ASNet can be considered a baseline tracker for the multi-drone single object tracking task.
II Related Work
II-A Single Object Tracking
In the field of visual tracking, single object tracking has attracted massive attention. Generally, single object trackers can be categorized into generative and discriminative models. Generative models search for the area most similar to the template from previous frames, e.g., Kalman filtering and mean-shift. Discriminative models treat visual tracking as a binary classification task that distinguishes the target from the background, e.g., Struck and TLD.
Inspired by the success of deep learning in image classification and object detection, deep trackers have achieved superior performance. MDNet learns an end-to-end deep tracker upon a convolutional neural network with a video-specific fully connected layer. Siam-FC exploits a siamese network that learns feature maps of both the target and the search region, and applies a convolution operation to obtain the response map. Recently, successful techniques from object detection, e.g., the region proposal network (RPN), have been embedded into tracking models: SiamRPN and its variants (DaSiamRPN, SiamRPN++ and C-RPN) use more powerful backbone networks and attractive building blocks to learn feature maps with better representation ability. Discriminative correlation filters (DCF) can be learned very efficiently in the frequency domain via the fast Fourier transform and therefore achieve very impressive tracking speed [24, 38, 16, 40, 28, 36, 33, 52]. To cope with severe variations, discriminative feature representations, nonlinear kernels, scale estimation, spatial regularization, continuous convolution and spatial-temporal regularization have been introduced to pursue a balance between accuracy and speed for correlation filter based trackers. The performance of single object tracking is easily degraded by severe appearance variations, occlusions, and out-of-view cases, which can be mitigated by using multiple cameras.
|Datasets||Scenarios||Sequences||Frames||# of Cameras||Year|
|POT210||planar objects|| || ||Single||2018|
II-B Multi-Camera Tracking
Multi-camera tracking uses information from different views, by estimating a common axis or subspace or by fusing multi-view information, to improve the robustness of trackers to occlusion, drift and other variations. Most existing works focus on multi-object tracking, especially multi-person detection and tracking in overlapping views or across non-overlapping views [43, 9, 44, 45, 30]. The single-camera trajectories can be given in advance or obtained by pedestrian detection and tracking. As the cameras are static, the spatial relations between them are either explicitly mapped in 3D, learned by tracking known identities, or obtained by comparing entry/exit rates across pairs of cameras. The multi-camera information can be fused at different stages: single-camera tracking is first performed in each camera to create trajectories of multiple targets, and then inter-camera tracking is carried out to associate the tracks. Trackers exploit optimization to maximize the coherence of observations for predicted identities. The spatial, temporal and appearance information of trajectories is used to construct an affinity matrix, whose nodes are then partitioned into different identities by bipartite matching or maximal internal weights.
II-C Tracking Datasets

Recent progress in visual tracking relies to a great extent on scarcely-available large-scale tracking datasets with high-quality annotations. There has been significant growth in the volume and diversity of benchmark datasets for visual tracking, e.g., TC-128, OTB-2015, NUS-PRO, VOT2016, LaSOT, and TrackingNet. Most datasets focus on single-camera, single object tracking. For drone based visual tracking, several benchmark datasets have been collected with a single drone, including UAV123, Campus, VisDrone-2018, and UAVDT. The UAV123 dataset contains a total of 123 video sequences and more than 110K frames for single object tracking. The Campus dataset includes 929.5K frames containing various types of objects. The VisDrone-2018 dataset consists of 263 video clips formed by 179,264 frames and 10,209 static images from 14 different cities for object detection and tracking in both images and videos. The UAVDT dataset consists of 80,000 representative frames with bounding boxes as well as up to 14 kinds of attributes from 10 hours of raw video for object detection and tracking in videos. To track and identify multiple objects across different cameras, a few multi-camera benchmark datasets have been collected for multi-object tracking and person re-identification. NLPR-MCT consists of four subsets with at most IDs with to non-overlapping cameras. DukeMTMC, the largest multi-camera multi-object tracking dataset, consists of videos of IDs and cameras in an outdoor scene with both overlapping views and blind spots; all trajectories are manually annotated and identities are associated across cameras. The existing datasets are collected either for single object tracking or for multi-camera tracking with static sensors. In this work, we collect a benchmark dataset using multiple drones, which is an effective supplement to the existing datasets.
III Multi-Drone Single Object Tracking Dataset
In this section, we present the collected benchmark dataset (MDOT) and evaluation metrics for multi-drone single object tracking.
III-A Data Collection
Our dataset is collected by multiple DJI Phantom 4 Pro drones. Specifically, the drones are controlled by several professional human operators at different altitudes in various outdoor scenes (e.g., park, campus, square, and street), as illustrated in Figure 2. In order to increase the diversity of the targets' appearances and scales, the same target is tracked by multiple drones from different view angles and different altitudes, ranging from m to m. The dataset comprises groups of video clips with high-resolution frames in two sub-datasets. The two-drone based dataset (Two-MDOT) consists of groups of video clips with high-resolution frames taken by two drones, while the three-drone based dataset (Three-MDOT) contains groups of video clips with high-resolution frames taken by three drones. Two-MDOT was collected in 2018 and Three-MDOT in 2019, so there is no overlap between the two. Besides, the dataset is divided into a train set ( groups in Two-MDOT and groups in Three-MDOT) and a test set ( groups in Two-MDOT and groups in Three-MDOT).
As presented in Table I, most previous datasets are collected by a single camera, so the appearance of targets is not abundant. Although NLPR_MCT and DukeMTMC are used for evaluating multi-target tracking and person re-identification, they are collected by static cameras. In comparison, MDOT can dynamically track targets with moving drones (see Figure 3). Note that we do not collect lidar data, because drones equipped with lidar are much more expensive than those equipped with visible-light cameras, and the accurate sensing distance of lidar is about 200 m, which offers no obvious advantage over visible-light cameras.
III-B Data Annotation

For annotation, we collect images with the size of and use the widely used annotation tool VATIC to annotate the location, occlusion and out-of-view information of the targets. After that, LabelMe is used to refine and double-check the annotations frame by frame. The targets in the sequences are divided into nine categories, i.e., pedestrian, car, carriage, motor, bicycle, tricycle, truck, dog, and bus, and the targets in each category are also diverse. Moreover, as shown in Table II, all the sequences are labeled with ten attributes, i.e., Daytime (DAY), Night (NIGHT), Camera Motion (CM), Partial Occlusion (POC), Full Occlusion (FOC), Out of View (OV), Similar Object (SO), Viewpoint Change (VC), Illumination Variation (IV), and Low Resolution (LR). The statistics with respect to the attributes are summarized in Figure 4. Notice that CM, IV and LR occur in most sequences, which may significantly degrade the performance of trackers. Following the setting of the classic single object tracking task, we manually annotate the tracking target in the first frame of each drone with respect to the same object.
|DAY||Daytime: the sequence is taken during the daytime.|
|NIGHT||Night: the sequence is taken at night.|
|CM||Camera Motion: abrupt motion of the camera.|
|POC||Partial Occlusion: the target is partially occluded in the sequence.|
|FOC||Full Occlusion: the target is fully occluded in the sequence.|
|OV||Out Of View: some frames of the target leave the view.|
|SO||Similar Object: there are targets of similar shape or same type near the target.|
|VC||Viewpoint Change: viewpoint affects target appearance significantly.|
|IV||Illumination Variation: the illumination in the target region changes.|
|LR||Low Resolution: the number of frames with tiny targets (fewer than  pixels) is more than .|
III-C Evaluation Metrics
Single object tracking is usually evaluated by success and precision plots. However, for multi-drone based tracking, algorithms should be evaluated on the fused multi-drone results. To this end, we propose two new metrics for multi-drone single object tracking, i.e., the automatic fusion score (AFS) and the ideal fusion score (IFS).
The automatic fusion score evaluates the performance of a multi-drone tracker that fuses the tracking results using an online fusion strategy.
Let $\mathbf{b}_{t,i}$ and $\mathbf{g}_{t,i}$ be the tracking result (i.e., location, width and height of the box) and the ground truth of the $t$-th frame and $i$-th drone. AFS is defined as

$$\mathrm{AFS} = \frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^{N} w_{t,i}\, s(\mathbf{b}_{t,i}, \mathbf{g}_{t,i}),$$

where $s(\cdot,\cdot)$ is an evaluation metric for single object tracking (i.e., the success or precision score) and $w_{t,i}$ is the weight for the $i$-th drone. $T$ and $N$ are the number of frames in a video clip and the number of drones, respectively. The value of $w_{t,i}$ is zero or one, and it is automatically learned and updated online for each frame during the tracking process.
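As a concrete illustration, the AFS computation can be sketched in a few lines; this is a minimal sketch with our own function names and toy scores, not results from the paper:

```python
import numpy as np

def afs(scores: np.ndarray, weights: np.ndarray) -> float:
    """Automatic fusion score over T frames and N drones.

    scores[t, i]  -- single-object-tracking score s(b_{t,i}, g_{t,i}) of
                     frame t on drone i (e.g. an IoU-based success score).
    weights[t, i] -- binary fusion weight w_{t,i}; one drone is selected
                     per frame, the rest get weight zero.
    """
    assert scores.shape == weights.shape
    assert np.all((weights == 0) | (weights == 1))
    return float((weights * scores).sum() / scores.shape[0])

# Toy example: T=3 frames, N=2 drones (illustrative numbers only).
scores = np.array([[0.8, 0.3],
                   [0.2, 0.7],
                   [0.6, 0.5]])
weights = np.array([[1, 0],
                    [0, 1],
                    [1, 0]])
print(round(afs(scores, weights), 3))  # → 0.7, the mean score of the selected drones
```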
The ideal fusion score is the fusion performance under the assumption that the multi-drone system always selects the tracking result of the drone with the best performance. It is defined to evaluate the upper-bound performance of a multi-drone tracking system, which can guide the design of superior multi-drone trackers.
Let $\mathbf{b}_{t,i}$ and $\mathbf{g}_{t,i}$ again be the tracking result and the ground truth of the $t$-th frame and $i$-th drone. IFS is defined as

$$\mathrm{IFS} = \frac{1}{T}\sum_{t=1}^{T}\max_{i=1,\dots,N} s(\mathbf{b}_{t,i}, \mathbf{g}_{t,i}).$$
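A minimal sketch of the IFS computation (the function name and toy numbers are ours): per frame, the best drone is assumed to be selected perfectly, so the per-frame maximum score is averaged.

```python
import numpy as np

def ifs(scores: np.ndarray) -> float:
    """Ideal fusion score: average of the per-frame maximum over drones.

    scores[t, i] -- single-object-tracking score of frame t on drone i.
    """
    return float(scores.max(axis=1).mean())

# Toy per-frame scores for T=3 frames and N=2 drones (illustrative only).
scores = np.array([[0.8, 0.3],
                   [0.2, 0.7],
                   [0.6, 0.5]])
print(round(ifs(scores), 3))  # → 0.7, an upper bound for any 0/1 per-frame fusion
```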
During the evaluation stage, the OPE metrics (success/precision) are the traditional metrics for single object trackers. Built on OPE, AFS and IFS are proposed for multi-drone trackers. Compared with AFS, IFS shows that there is still a gap to the upper bound of multi-drone fusion, which motivates the design of superior multi-drone trackers with more effective fusion strategies.
IV Agent Sharing Network
The key challenge of multi-drone tracking is how to share inter-drone information and adaptively fuse the tracking results. To deal with this challenge, each drone is considered as an agent and we propose an agent sharing network (ASNet) for multi-drone tracking, which effectively exploits the inter-agent complementary information; see Figure 5.
IV-A Network Architecture
The dynamic siamese network (DSiam) enables effective online learning of target appearance variation and background suppression from previous frames. Therefore, we choose DSiam as the base tracker and develop the corresponding multi-drone tracker. A common tracker is trained for all drones, since all drones track the same target in the same scene; therefore, there is no bias toward any particular drone. We focus on the online tracking process and design the agent sharing network from three aspects, i.e., template sharing, view-aware fusion and target re-detection.
Let $\mathbf{z}_{1,i}$ and $\mathbf{x}_{t,i}$ denote the template of the first frame and the search region of the $t$-th frame with respect to the $i$-th drone, respectively. With an embedding block $f(\cdot)$, e.g., a convolutional neural network (CNN), deep features can be extracted for both the templates and the search regions, i.e., $f(\mathbf{z}_{1,i})$ and $f(\mathbf{x}_{t,i})$. The key components of DSiam are the target appearance variation transformation $\mathcal{V}$ and the background suppression transformation $\mathcal{W}$. For ASNet, we need to determine these transformations for all drones. The target appearance variation transformation with respect to the $i$-th drone is learned by

$$\mathcal{V}_{t,i} = \arg\min_{\mathcal{V}} \big\|\mathcal{V} \circledast f(\mathbf{z}_{1,i}) - f(\mathbf{z}_{t-1,i})\big\|_2^2 + \lambda \|\mathcal{V}\|_2^2,$$

where $\lambda$ is a regularization parameter and $\mathbf{z}_{t-1,i}$ is the tracked target of the $(t-1)$-th frame for the $i$-th drone. $\circledast$ denotes the circular convolution, which can be computed rapidly in the frequency domain. $\mathcal{V}_{t,i}$ captures the target variation under a temporal smoothness assumption and therefore contributes greatly to online learning. Similarly, the background suppression transformation can be learned as

$$\mathcal{W}_{t,i} = \arg\min_{\mathcal{W}} \big\|\mathcal{W} \circledast f(\bar{\mathbf{x}}_{t-1,i}) - f(\bar{\mathbf{G}}_{t-1,i})\big\|_2^2 + \lambda \|\mathcal{W}\|_2^2,$$

where $\bar{\mathbf{x}}_{t-1,i}$ is the region centered at the target with the same size as $\mathbf{x}_{t-1,i}$, and $\bar{\mathbf{G}}_{t-1,i}$ is obtained by multiplying $\bar{\mathbf{x}}_{t-1,i}$ with a Gaussian weight map. $\mathcal{W}_{t,i}$ suppresses the background information and therefore induces superior tracking performance. More details about the solutions for $\mathcal{V}_{t,i}$ and $\mathcal{W}_{t,i}$ can be found in the DSiam paper.
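Regularized circular-convolution objectives of this kind admit a closed-form solution per frequency bin, which is what makes the online updates fast. The following is a generic ridge-regression-in-the-frequency-domain sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def learn_transform(f_src, f_dst, lam=1e-2):
    """Learn a circular-convolution transform V minimizing
    ||V (*) f_src - f_dst||^2 + lam * ||V||^2, where (*) is circular
    convolution. Solved element-wise in the frequency domain."""
    F_src = np.fft.fft2(f_src)
    F_dst = np.fft.fft2(f_dst)
    # Closed form per frequency: V_hat = F_dst * conj(F_src) / (|F_src|^2 + lam)
    V_hat = (F_dst * np.conj(F_src)) / (F_src * np.conj(F_src) + lam)
    return np.real(np.fft.ifft2(V_hat))

def apply_transform(V, f):
    """Apply V by circular convolution, again via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(V) * np.fft.fft2(f)))

# Sanity check: a pure circular shift is exactly representable by such a V,
# so with a tiny lam the transform should reproduce the shifted map.
rng = np.random.default_rng(0)
f_src = rng.random((8, 8))
f_dst = np.roll(f_src, 2, axis=0)
V = learn_transform(f_src, f_dst, lam=1e-6)
print(np.max(np.abs(apply_transform(V, f_src) - f_dst)) < 1e-3)  # → True
```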
Compared with visual tracking using a single drone, ASNet shares the templates of all drones and obtains response maps corresponding to the templates of multiple drones. As the reliability of the templates differs across drones, we adaptively fuse the response maps of the multiple templates in a self-supervised manner. Finally, the tracking results of the multiple drones are fused adaptively according to their tracking scores.
IV-B Self-supervised Template Sharing
As the appearance of the target may vary greatly, the templates of all drones can be shared to improve the robustness of each single-drone tracker. The response map of the tracker on the $i$-th drone, corresponding to the template of the $j$-th drone, is calculated as

$$\mathbf{R}_t^{i,j} = \mathrm{corr}\big(\mathcal{V}_{t,j} \circledast f(\mathbf{z}_{1,j}),\ \mathcal{W}_{t,i} \circledast f(\mathbf{x}_{t,i})\big),$$

where $\mathrm{corr}(\cdot,\cdot)$ is the correlation operation, which can be considered a convolution on $\mathcal{W}_{t,i} \circledast f(\mathbf{x}_{t,i})$ with $\mathcal{V}_{t,j} \circledast f(\mathbf{z}_{1,j})$ as the convolution filter. For the $i$-th drone, we thus obtain a set of response maps $\{\mathbf{R}_t^{i,j}\}_{j=1}^{N}$. To fuse the response maps, we propose a self-supervised fusion strategy. Specifically, we use the tracking results of the previous frames as supervisory information to guide the learning of the template weights. Let $\tilde{\mathbf{z}}_{t-1}^{i,j}$ denote the target tracked by the tracker on the $i$-th drone using the template of the $j$-th drone. The fusion weights can be learned by

$$\alpha_t^{i,j} = \frac{\exp\big(c(f(\tilde{\mathbf{z}}_{t-1}^{i,j}), f(\mathbf{z}_{1,j}))\big)}{\sum_{k=1}^{N} \exp\big(c(f(\tilde{\mathbf{z}}_{t-1}^{i,k}), f(\mathbf{z}_{1,k}))\big)},$$

where $c(\cdot,\cdot)$ measures the correlation between two feature maps. The weights reflect the correlation between the tracked target of the $(t-1)$-th frame and the target templates. Given the learned weights and response maps, we obtain the fused response map for the tracker on the $i$-th drone, i.e.,

$$\bar{\mathbf{R}}_t^{i} = \sum_{j=1}^{N} \alpha_t^{i,j}\, \mathbf{R}_t^{i,j}.$$

For a multi-drone tracking system with $N$ drones, we obtain $N$ fused response maps in total, i.e., $\{\bar{\mathbf{R}}_t^{i}\}_{i=1}^{N}$.
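The self-supervised fusion step can be sketched as follows. The softmax normalization of the correlation scores is our assumption of one reasonable weighting scheme, not necessarily the paper's exact form, and all names are ours:

```python
import numpy as np

def fuse_response_maps(responses, corr_scores):
    """Fuse the response maps of one drone obtained with shared templates.

    responses[j]    -- response map of the current drone computed with the
                       template shared by drone j.
    corr_scores[j]  -- correlation between the previously tracked target and
                       template j (the self-supervisory signal).
    Returns the weighted sum of the maps and the normalized weights.
    """
    c = np.asarray(corr_scores, dtype=float)
    w = np.exp(c - c.max())          # numerically stable softmax
    w /= w.sum()
    fused = sum(wj * Rj for wj, Rj in zip(w, responses))
    return fused, w

# Two equally reliable templates -> equal weights and an averaged map.
R = [np.zeros((2, 2)), np.ones((2, 2))]
fused, w = fuse_response_maps(R, [0.0, 0.0])
print(w)      # equal weights, each 0.5
print(fused)  # every entry 0.5
```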
IV-C View-aware Fusion
To generate the final multi-drone tracking result, we apply an automatic view-aware fusion scheme once the fused response maps are obtained. For the $i$-th drone's response map, we search for the maximum value and obtain its location $\mathbf{p}_t^{i}$; the peak value $s_t^{i} = \max_{\mathbf{p}} \bar{\mathbf{R}}_t^{i}(\mathbf{p})$ is defined as the tracking score of the $t$-th frame on the $i$-th tracker. Then, the index of the best response map is obtained by

$$i^{*} = \arg\max_{i=1,\dots,N} s_t^{i}.$$

The corresponding location $\mathbf{p}_t^{i^{*}}$ in the $i^{*}$-th drone is taken as the position of the target. The weight of the drone tracker with the best response map is set to one and the rest to zero.
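A minimal sketch of the view-aware selection (function and variable names are ours):

```python
import numpy as np

def select_best_drone(fused_maps):
    """View-aware fusion: each drone's tracking score is the peak of its
    fused response map; the drone with the highest peak supplies the final
    target location and gets fusion weight one, the others zero."""
    peaks = [float(m.max()) for m in fused_maps]
    best = int(np.argmax(peaks))
    loc = tuple(int(v) for v in
                np.unravel_index(int(np.argmax(fused_maps[best])),
                                 fused_maps[best].shape))
    weights = [1.0 if i == best else 0.0 for i in range(len(fused_maps))]
    return best, loc, weights

maps = [np.zeros((3, 3)), np.zeros((3, 3))]
maps[1][1, 2] = 0.9                # drone 1 has the strongest peak
best, loc, weights = select_best_drone(maps)
print(best, loc, weights)          # → 1 (1, 2) [0.0, 1.0]
```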
IV-D Target Re-detection
As camera motion often occurs in drone based tracking, the target location may vary dramatically between successive frames. To solve this problem, we use a target re-detection strategy based on the past and current frames. For the $t$-th frame, let $\{s_{t-K}^{i}, \dots, s_{t-1}^{i}\}$ denote the set of tracking scores over the past $K$ frames, and let $\mu_t$ and $\sigma_t$ denote their mean and standard deviation. Inspired by the peak-to-sidelobe ratio in MOSSE, the threshold for target re-detection is defined as

$$\theta_t = \mu_t - \gamma\, \sigma_t,$$

where $\gamma$ is a pre-set parameter. The target may be lost when the tracking score $s_t^{i}$ is less than the threshold $\theta_t$. In that case, we use a local-to-global strategy to expand the search region step by step to re-detect the target. With the proposed re-detection strategy, the tracking performance is greatly improved.
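The re-detection trigger can be sketched as follows; the exact threshold form (mean minus a multiple of the standard deviation of past scores) is our assumption, a minimal sketch rather than the authors' implementation:

```python
import numpy as np

def redetection_threshold(past_scores, gamma):
    """Threshold from the mean and standard deviation of the tracking scores
    of the past K frames (the exact form is our assumption)."""
    mu = float(np.mean(past_scores))
    sigma = float(np.std(past_scores))
    return mu - gamma * sigma

def need_redetect(score, past_scores, gamma=2.0):
    """True when the current tracking score drops below the threshold, i.e.
    the target may be lost and the search region should be enlarged step by
    step (local-to-global) to re-detect it."""
    return score < redetection_threshold(past_scores, gamma)

past = [0.80, 0.90, 1.00]          # scores of the last K frames (toy values)
print(need_redetect(0.50, past))   # → True  -- sharp drop, trigger re-detection
print(need_redetect(0.90, past))   # → False -- tracking looks stable
```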
V Experiments

We evaluate our method against recent state-of-the-art single object tracking algorithms on a machine with an E5-2620 v3 CPU and an NVIDIA TITAN Xp GPU. Note that the existing multi-camera tracking methods are specially designed for multi-object tracking and therefore cannot be applied directly to multi-drone single object tracking. The source code of the other algorithms is provided by their authors.
V-A Overall Performance
We report the success and precision scores, tracking speed and the reference of each algorithm in Table III and Table IV. The tracking performance on each drone is reported for the baseline single object trackers. Besides, the overall performance of the baseline trackers is given by calculating the success and precision scores over all drones together. Note that for the proposed ASNet, we report success and precision using AFS in Definition 1 in terms of multi-drone tracking. As shown in Table III and Table IV, GFSDCF achieves the best precision score of on Two-MDOT and on Three-MDOT. Following GFSDCF, other correlation filter trackers, e.g., ECO, STRCF and CSRDCF, also obtain strong precision and success scores. Besides, thanks to extensive offline training, the siamese tracking approaches DSiam, SiamRPN++ and SiameseFC show top performance as well. Our proposed ASNet significantly outperforms the baseline trackers on all sub-datasets. Compared with the best baseline tracker, the precision scores are improved by on Two-MDOT and on Three-MDOT, respectively. The results show that, compared with tracking using only one drone, multi-drone tracking with our proposed ASNet greatly boosts the tracking performance, which validates the necessity and effectiveness of multi-drone tracking. The significant improvement comes from the fusion of complementary information across agents in the case of great appearance variations.
To further analyze the performance, we report the overall performance of the proposed ASNet and the compared state-of-the-art trackers in Figure 6. Notably, the success and precision plots of ASNet are drawn based on the AFS metric defined in our paper. ASNet achieves the best performance on the proposed dataset, i.e., a success score of on the Two-MDOT subset and on the Three-MDOT subset, owing to the fusion of complementary information across agents in the case of great appearance variations. SiamRPN++ and GFSDCF rank second and third among the other methods in terms of success score, respectively. Following these three trackers, the siamese network based tracker DSiam and the correlation filter based tracker ECO obtain slightly inferior performance in both success and precision scores.
|Algorithms||Re-detection||Template Sharing||View-aware Fusion||Two-MDOT||Three-MDOT|
|(3) Template Sharing|| ||✓|| ||38.2 / 60.3||43.5 / 65.4|
|(4) ASNet w/o VF||✓||✓|| ||39.6 / 62.9||44.7 / 67.0|
|(5) View-aware Fusion|| || ||✓||46.3 / 71.6||52.0 / 75.2|
|(6) ASNet w/o RD|| ||✓||✓||47.0 / 72.7||52.1 / 76.0|
|(7) ASNet w/o TS||✓|| ||✓||47.6 / 73.4||52.7 / 76.6|
V-B Attribute-based Performance
To further analyze the performance, we report the success scores of the algorithms over the attributes in Figure 8. The performance on the attributes CM, FOC, VC and IV is inferior to that on the other attributes, likely because the target appearance changes heavily in these situations. We observe that the performance of DSiam on the attributes CM, SO and VC is far ahead of the other methods. For the other attributes, DSiam achieves the best performance on the test set, while the gap between DSiam and the best competitor ECO on the other UAV based datasets is small. Moreover, our proposed tracker ASNet achieves the best performance on all attributes. There is a significant gap between ASNet and the other compared trackers, which owes to the discriminative appearance information from the template sharing and re-detection strategies.
Figure 9 shows the precision plots of the compared tracking algorithms over the challenging visual attributes. Again, the performance on the attributes CM, FOC, VC and IV is inferior to that on the other attributes, likely because the target appearance changes heavily in these situations. Our proposed tracker ASNet achieves the best performance on all attributes, with a significant gap over the other compared trackers, which is attributed to the discriminative appearance representation from the template sharing and re-detection strategies in our method. Following our method, DSiam, SiamRPN++ and GFSDCF achieve good performance on most attributes, much better than the remaining compared methods.
V-C Ideal Fusion Score
To investigate the ideal performance of a multi-drone tracking system, we report the success and precision plots of the existing trackers under the IFS metric on the MDOT test set in Figure 7. Note that there is a big difference in the ideal fusion performance of different trackers. Similar to ensemble learning techniques, IFS depends on the performance of the base tracker on each drone. For instance, the tracking results of the baseline tracker DSiam can be ideally fused to a precision of on Two-MDOT and on Three-MDOT, respectively. For any single object tracker, IFS can be used to guide the design of a multi-drone tracker built on that base tracker. As the tracking mechanisms of the base trackers differ, more generalized or tracker-specific fusion strategies are expected to be designed by the research community to boost the performance of multi-drone tracking.
V-D Ablation Study
As shown in Table V, we analyze the importance of each component of ASNet on our MDOT dataset, i.e., re-detection, template sharing and view-aware fusion. Specifically, we use DSiam as the base tracker of ASNet and add re-detection, template sharing and view-aware fusion in turn.
Re-detection. We first add the re-detection component to the DSiam tracker. In this module, the hyperparameter is set to in Two-MDOT and in Three-MDOT. As shown in Table V (2), the re-detection module improves the precision score by and the success score by on Two-MDOT and Three-MDOT, respectively. This indicates that the re-detection module can reduce the probability of target drift, especially for long-term tracking.
Template Sharing. Table V (3) shows the results of template sharing based on the DSiam tracker. It brings additional improvements in both precision score () and success score (). When we combine target re-detection and template sharing, the precision and success scores are further improved.
View-aware Fusion. As presented in Table V (5), applying only the view-aware fusion to the DSiam tracker already improves the performance greatly. Combining view-aware fusion with either of the other components yields consistent improvements, as shown in Table V (6, 7). Finally, we add the re-detection module, the template sharing design and the view-aware fusion scheme together to the DSiam tracker (ASNet), and observe considerable improvements in precision score () and success score ().
As information is exchanged across drones, we need to take the impact of synchronization and latency into account. Similar to the setting of multi-camera tasks, we assume that the videos of multiple drones are synchronized by starting tracking across drones simultaneously. Re-detection is conducted on each drone separately, while view-aware fusion only needs one tracking score value per drone. Hence, the only component of ASNet affected by communication latency is template sharing. Even if the template sharing strategy is not adopted, ASNet still achieves superior performance, as shown in Table V (7). In real-world applications, as communication network technology develops and communication latency becomes negligible, the sharing of visual information across drones can be exploited more effectively.
VI Conclusion

In this paper, we present a new multi-drone single object tracking (MDOT) benchmark dataset for the object tracking community. MDOT is a unique platform for developing drone based tracking algorithms and multi-drone tracking systems. Moreover, two evaluation metrics, i.e., the automatic fusion score (AFS) and the ideal fusion score (IFS), are proposed for multi-drone single object tracking. To exploit the complementary information across drones, an agent sharing network (ASNet) is proposed that shares inter-drone templates, fuses multi-drone tracking results and re-detects the targets. Extensive experiments on MDOT show that ASNet outperforms state-of-the-art single object trackers, which validates the effectiveness of multi-drone tracking.
-  (2011) Multiple object tracking using k-shortest paths optimization. TPAMI 33 (9), pp. 1806–1819. Cited by: §I.
-  (2016-06) Staple: complementary learners for real-time tracking. In CVPR, Cited by: TABLE III, TABLE IV.
-  (2016) Fully-convolutional siamese networks for object tracking. In ECCV, pp. 850–865. Cited by: §II-A, TABLE III, TABLE IV.
-  (2010) Visual object tracking using adaptive correlation filters. In CVPR, pp. 2544–2550. Cited by: §IV-D.
-  (2014) Exploring context information for inter-camera multiple target tracking. In WACV, pp. 761–768. Cited by: §I.
-  (2014) A novel solution for multi-camera object tracking. In ICIP, pp. 2329–2333. Cited by: §I.
-  (2016) An equalised global graphical model-based approach for multi-camera object tracking. TCSVT. Cited by: §II-C, TABLE I.
-  (2017) An equalized global graph model-based approach for multicamera object tracking. T-CSVT 27 (11), pp. 2367–2381. Cited by: §I.
-  (2017) Integrating social grouping for multitarget tracking across cameras in a crf model. TCSVT 27 (11), pp. 2382–2394. Cited by: §I, §II-B.
-  (2018) Context-aware deep feature compression for high-speed visual tracking. In CVPR, pp. 479–488. Cited by: TABLE III, TABLE IV.
-  (2016) Visual tracking using attention-modulated disintegration and integration. In CVPR, pp. 4321–4330. Cited by: TABLE III, TABLE IV.
-  (2000) Real-time tracking of non-rigid objects using mean shift. In CVPR, Vol. 2, pp. 142–149. Cited by: §II-A.
-  (2017) Eco: efficient convolution operators for tracking. In CVPR, pp. 6638–6646. Cited by: §V-A, TABLE III, TABLE IV.
-  (2014) Accurate scale estimation for robust visual tracking. In BMVC, Cited by: TABLE III, TABLE IV.
-  (2017) Discriminative scale space tracking. T-PAMI 39 (8), pp. 1561–1575. Cited by: TABLE III, TABLE IV.
-  (2015) Learning spatially regularized correlation filters for visual tracking. In ICCV, pp. 4310–4318. Cited by: §II-A, TABLE III, TABLE IV.
-  (2018) The unmanned aerial vehicle benchmark: object detection and tracking. In ECCV, pp. 370–386. Cited by: §I, §I, §II-C, TABLE I.
-  (2019) LaSOT: A high-quality benchmark for large-scale single object tracking. In CVPR, pp. 5374–5383. Cited by: TABLE I.
-  (2019) Lasot: a high-quality benchmark for large-scale single object tracking. In , pp. 5374–5383. Cited by: §II-C.
-  (2017) Parallel tracking and verifying: a framework for real-time and high accuracy visual tracking. In ICCV, pp. 5486–5494. Cited by: TABLE III, TABLE IV.
-  (2019) Siamese cascaded region proposal networks for real-time visual tracking. In CVPR, Cited by: §II-A.
-  (2017) Learning dynamic siamese network for visual object tracking. In ICCV, pp. 1763–1771. Cited by: §IV-A, §IV-A, §V-A, TABLE III, TABLE IV.
-  (2016) Struck: structured output tracking with kernels. T-PAMI 38 (10), pp. 2096–2109. Cited by: §II-A.
-  (2015) High-speed tracking with kernelized correlation filters. T-PAMI 37 (3), pp. 583–596. Cited by: §II-A, TABLE III, TABLE IV.
-  (2016) A unified approach for multi-object triangulation, tracking and camera calibration. TSP 64 (11), pp. 2934–2948. Cited by: §I.
-  (2012) Tracking-learning-detection. T-PAMI 34 (7), pp. 1409–1422. Cited by: §II-A.
-  (2017) Need for speed: a benchmark for higher frame rate object tracking. In ICCV, pp. 1125–1134. Cited by: TABLE I.
-  (2017) Learning background-aware correlation filters for visual tracking. In ICCV, pp. 1135–1143. Cited by: §II-A, TABLE III, TABLE IV.
-  (2016) The visual object tracking VOT2016 challenge results. In ECCVW, pp. 777–823. Cited by: §II-C, TABLE I.
-  (2018) Online-learning-based human tracking across non-overlapping cameras. T-CSVT 28 (10), pp. 2870–2883. Cited by: §I, §II-B.
-  (2016) NUS-PRO: A new visual tracking challenge. T-PAMI 38 (2), pp. 335–349. Cited by: §II-C.
-  (2019) SiamRPN++: evolution of siamese visual tracking with very deep networks. In CVPR, Cited by: §II-A, §V-A, TABLE III, TABLE IV.
-  (2018) Learning spatial-temporal regularized correlation filters for visual tracking. In CVPR, pp. 4904–4913. Cited by: §II-A, TABLE III, TABLE IV.
-  (2015) Encoding color information for visual tracking: algorithms and benchmark. TIP 24 (12), pp. 5630–5644. Cited by: §II-C, TABLE I.
-  (2018) Planar object tracking in the wild: A benchmark. In ICRA, Cited by: TABLE I.
-  (2017) Discriminative correlation filter with channel and spatial reliability. In CVPR, pp. 6309–6318. Cited by: §II-A, TABLE III, TABLE IV.
-  (2015) Hierarchical convolutional features for visual tracking. In ICCV, pp. 3074–3082. Cited by: TABLE III, TABLE IV.
-  (2015) Long-term correlation tracking. In CVPR, pp. 5388–5396. Cited by: §II-A, TABLE III, TABLE IV.
-  (2016) A benchmark and simulator for UAV tracking. In ECCV, pp. 445–461. Cited by: §I, §I, §II-C, TABLE I.
-  (2017) Context-aware correlation filter tracking. In CVPR, pp. 1396–1404. Cited by: §II-A, TABLE III, TABLE IV.
-  (2018) TrackingNet: a large-scale dataset and benchmark for object tracking in the wild. In ECCV, Cited by: §II-C.
-  (2016) Learning multi-domain convolutional neural networks for visual tracking. In CVPR, pp. 4293–4302. Cited by: §II-A.
-  (2017) Person re-identification for improved multi-person multi-camera tracking by continuous entity association. In CVPRW, pp. 64–70. Cited by: §I, §II-B.
-  (2016) Performance measures and a data set for multi-target, multi-camera tracking. In ECCV, pp. 17–35. Cited by: §I, §II-B, §II-C, TABLE I.
-  (2018) Features for multi-target multi-camera tracking and re-identification. In CVPR, pp. 6036–6046. Cited by: §I, §II-B.
-  (2016) Learning social etiquette: human trajectory understanding in crowded scenes. In ECCV, pp. 549–565. Cited by: §I, §II-C.
-  (2008) LabelMe: a database and web-based tool for image annotation. IJCV 77 (1-3), pp. 157–173. Cited by: §III-B.
-  (2014) Visual tracking: an experimental survey. T-PAMI 36 (7), pp. 1442–1468. Cited by: TABLE I.
-  (2017) End-to-end representation learning for correlation filter based tracking. In CVPR, pp. 2805–2813. Cited by: TABLE III, TABLE IV.
-  (2013) Efficiently scaling up crowdsourced video annotation: A set of best practices for high quality, economical video labeling. IJCV 101 (1), pp. 184–204. Cited by: §III-B.
-  (2015) Object tracking benchmark. T-PAMI 37 (9), pp. 1834–1848. Cited by: §II-C, TABLE I, §III-C.
-  (2019) Joint group feature selection and discriminative filter learning for robust visual object tracking. In ICCV, pp. 7950–7960. Cited by: §II-A, §V-A, TABLE III, TABLE IV.
-  (2015) A camera network tracking (camnet) dataset and performance baseline. In WACV, pp. 365–372. Cited by: §I.
-  (2018) Vision meets drones: A challenge. CoRR abs/1804.07437. Cited by: §I, §I, §II-C, TABLE I.
-  (2018) Distractor-aware siamese networks for visual object tracking. In ECCV, pp. 101–117. Cited by: §II-A, §IV-D.