Real-Time Correlation Tracking via Joint Model Compression and Transfer

07/23/2019 ∙ by Ning Wang, et al. ∙ USTC

Correlation filters (CF) have received considerable attention in visual tracking because of their computational efficiency. Leveraging deep features via off-the-shelf CNN models (e.g., VGG), CF trackers achieve state-of-the-art performance while consuming a large number of computing resources. This limits deep CF trackers to be deployed to many mobile platforms on which only a single-core CPU is available. In this paper, we propose to jointly compress and transfer off-the-shelf CNN models within a knowledge distillation framework. We formulate a CNN model pretrained from the image classification task as a teacher network, and distill this teacher network into a lightweight student network as the feature extractor to speed up CF trackers. In the distillation process, we propose a fidelity loss to enable the student network to maintain the representation capability of the teacher network. Meanwhile, we design a tracking loss to adapt the objective of the student network from object recognition to visual tracking. The distillation process is performed offline on multiple layers and adaptively updates the student network using a background-aware online learning scheme. Extensive experiments on five challenging datasets demonstrate that the lightweight student network accelerates the speed of state-of-the-art deep CF trackers to real-time on a single-core CPU while maintaining almost the same tracking accuracy.


I Introduction

There has been an increasing demand for visual object tracking algorithms in numerous vision applications. Typical examples include video surveillance, human-computer interaction, and autonomous driving. As a key component, tracking target objects in real time plays a critical role in improving the overall efficiency of vision applications. The visual tracking framework based on Correlation Filters (CF) has been widely investigated [1, 2, 3] because of the efficient correlation computation in the Fourier domain. When integrated with CNN features, CF trackers [4, 5, 6] achieve state-of-the-art tracking accuracy. However, extracting high-dimensional deep features incurs a huge computational cost and prevents CF trackers from reaching real-time speed. Although deep operations can always be accelerated by GPUs, deep CF trackers cannot be deployed on CPU-only devices; for example, many mobile phones do not have a suitable GPU. The challenges of using off-the-shelf CNN models (e.g., VGG [7]) include a huge demand for memory storage, a heavy computational burden, and high power consumption. It is therefore worthwhile, yet non-trivial, to investigate how to accelerate deep CF trackers to real-time speed on a CPU platform without suffering a significant drop in tracking accuracy.

Fig. 1: Tracking results on the OTB-2015 dataset [8]. The proposed method accelerates state-of-the-art deep CF trackers (i.e., ECO [5] and DeepSTRCF [9]) through joint CNN model compression and transfer. The improved CF trackers (i.e., fECO and fDeepSTRCF) perform favorably against existing methods and achieve real-time speed (more than 20 FPS) on a single-core CPU. It is worth mentioning that most existing real-time trackers shown on the left cannot achieve real-time speed on a CPU, while the recent performance leaders shown on the right are far from real-time even on a GPU.

In this paper, we jointly compress and transfer off-the-shelf CNN models into a lightweight feature extractor. The lightweight feature extractor enables deep CF trackers to achieve real-time speed while consuming less memory. The model compression and transfer are performed from the perspective of knowledge distillation [10, 11]. We take the off-the-shelf model, pretrained for the object recognition task, as a teacher network. On the other hand, a low-capacity student network is used to learn from the teacher network. In the distillation process, we propose two types of losses: a fidelity loss and a correlation tracking loss. The fidelity loss ensures that the student network preserves the representation of the teacher network, while the correlation tracking loss transfers the objective of the student network from object recognition to visual tracking. We take the hierarchies of deep models into account and perform the distillation process on multiple CNN layers offline. After distillation, the student network maintains the high-level semantic discrimination enforced by the fidelity loss. Besides, the tracking loss helps the student network to produce target-specific CNN representations. During online tracking, we propose a background-aware adaptation method to update the student network for further performance improvement.

The student network is a lightweight feature extraction backbone. The model size of the student network is only 1.5 MB while the original size of the teacher network is 95 MB (i.e., about 63× smaller). When integrated with the proposed lightweight backbone, the state-of-the-art deep CF trackers including ECO [5] and DeepSTRCF [9] are able to achieve real-time speed on a single-core CPU while maintaining almost the same tracking accuracy on prevalent tracking benchmarks.

We summarize the contributions of our work as follows:

  • We compress and adapt off-the-shelf deep CNN models into lightweight backbones by knowledge distillation. We propose a fidelity loss and a correlation tracking loss to jointly compress the network and transfer its objective from object recognition to visual tracking.

  • We propose to distill the student network offline via hierarchical CNN representations. We also propose a background-aware adaptation method to incrementally fine-tune the student network to adapt to target appearance changes.

  • We integrate the proposed lightweight backbone into the state-of-the-art deep CF trackers [5, 9]. Evaluations on the large-scale benchmark datasets indicate the effectiveness of the proposed method in terms of the real-time speed and tracking accuracy.

In the remainder of the paper, we describe the related work in Section II, correlation tracking in Section III, the proposed approach in Section IV, and experiments in Section V. Finally, we conclude the paper in Section VI.

II Related Work

In this section, we briefly survey the closely related literature on three aspects: tracking by correlation filters, real-time tracking, and network compression.

II-A Correlation Tracking

Correlation filters have been widely studied in visual tracking since the MOSSE method [1] was proposed by Bolme et al. in 2010. The correlation filter is trained by minimizing a ridge regression loss over all circular shifts of the training sample, which can be efficiently solved in the Fourier domain [12]. Henriques et al. exploited the circulant structure of training patches in the kernel space [2]. The SRDCF tracker [13] alleviates the boundary effects by penalizing correlation filter coefficients depending on their spatial locations. The CSR-DCF algorithm [14] constructs filters with channel and spatial reliability. The C-COT [15] adopts a continuous-domain formulation and is further improved by an efficient convolution operator (ECO [5]). The recent DRT tracker [16] jointly learns the discrimination and reliability of the CF. In addition, multiple kernels [17], combination with particle filters [18], re-detection mechanisms for long-term scenarios [19, 20] and ensemble learning schemes [21, 22, 23] have also been investigated in the CF family. In recent years, the combination of CF trackers and deep features from off-the-shelf CNN models has demonstrated impressive results [4, 24, 5]. Even though state-of-the-art results can be obtained by leveraging deep feature representations, the characteristic real-time efficiency of the correlation filter has gradually faded due to the adopted heavyweight CNN models. In this work, different from the above approaches that put emphasis on learning more discriminative filters, we focus on learning a distilled lightweight backbone network that enables high-performance real-time correlation tracking even on a single-core CPU.

II-B Real-time Tracking

The Siamese network has been widely studied for real-time tracking. The fully convolutional Siamese network (SiamFC) regards the tracking task as a similarity learning problem, and compares the template patch with the candidate patches in the search region in a sliding-window manner. On the basis of the SiamFC framework [25], the correlation layer [26], attention mechanism [27], semantic branch [28] and unsupervised learning scheme [29] have been widely explored. The recent region proposal Siamese networks [30, 31] achieve higher speed compared with SiamFC [25] by discarding multiple-scale estimation. However, Siamese networks heavily rely on powerful GPUs, and their running speed on a CPU is only 2 to 3 FPS [32] due to the heavyweight model complexity.

On the other hand, CF trackers can achieve real-time speed when using lightweight hand-crafted features such as HOG and ColorNames [2, 33, 9, 22], but they typically exhibit an obvious performance gap compared with the remarkable deep CF trackers. Equipped with CNN features, CF trackers achieve state-of-the-art tracking accuracy but suffer from a large computational cost. Feature dimension reduction methods, such as PCA [34], the factorized convolution operator [5], and an encoder network [35], can reduce the feature complexity to some extent. However, these methods still have to first extract high-dimensional CNN features from the original heavyweight deep models. In contrast, our method produces a lightweight network offline for efficient feature extraction, which not only naturally reduces the feature dimension but also greatly reduces the feature extraction time.

Fig. 2: Pipeline of knowledge distillation and online prediction. We compress the teacher network offline using the proposed fidelity loss and correlation tracking loss. In the online stage, the distilled student network adapts to the target object of each input video sequence and helps track the target object in real time on a single-core CPU.

II-C Network Compression

There are two typical network compression approaches: model pruning and knowledge distillation. Model pruning [36] usually removes unimportant filter weights and utilizes fine-tuning to recover accuracy. Knowledge distillation [10, 11] is based on the observation that a small network has a representation capability similar to that of a large network but is usually harder to train on its own [37]. Knowledge distillation [10, 11] uses a powerful teacher network to guide a smaller student network. The student is forced to mimic the feature representation [11, 37] or classification probabilities [10] of its teacher. However, previous methods usually compress models directly on the same vision task (e.g., image classification). In contrast, our method not only compresses the deep models but also transfers the objective to the tracking task. Therefore, the distillation and tracking processes are jointly optimized in an end-to-end manner. Unlike existing methods that achieve relatively limited compression rates and speed acceleration [11, 10], by virtue of collaborative training, we achieve a much larger compression rate of about 63× while maintaining almost the same tracking accuracy. In [38], the classic knowledge distillation scheme is used to compress off-the-shelf CNN networks within the tracking framework. We note that how to bridge the gap between object recognition and visual tracking is not fully explored there. In this work, we propose to simultaneously distill the pretrained networks and narrow the task gap, which helps our method achieve a much higher compression rate and a real-time speed on a CPU. To further reduce the model degradation caused by compression, we propose multiple-level knowledge transfer and employ a background-aware online adaptation scheme to fine-tune the student network for each sequence.

III Revisiting Correlation Tracking

A typical CF-based tracker [1, 2] is trained using an image patch x centered around the target. All the circular shifts of the target patch are generated as training samples with Gaussian function labels. Considering the feature embedding φ(x), the filter w can be trained by minimizing the following regularized regression objective:

$$\min_{\mathbf{w}} \Big\| \sum_{d=1}^{D} \mathbf{w}^{d} \star \varphi^{d}(\mathbf{x}) - \mathbf{y} \Big\|_2^{2} + \lambda \sum_{d=1}^{D} \big\| \mathbf{w}^{d} \big\|_2^{2}, \qquad (1)$$

where λ is a regularization parameter, D is the number of feature channels, ⋆ denotes circular correlation, and y is the desired Gaussian label. The correlation filter on the d-th (d = 1, …, D) channel can be efficiently learned as follows:

$$\hat{\mathbf{w}}^{d} = \frac{\hat{\mathbf{y}} \odot \hat{\varphi}^{d}(\mathbf{x})^{*}}{\sum_{k=1}^{D} \hat{\varphi}^{k}(\mathbf{x}) \odot \hat{\varphi}^{k}(\mathbf{x})^{*} + \lambda}, \qquad (2)$$

where ⊙ is the element-wise product, the hat notation (e.g., ŷ) denotes the Discrete Fourier Transform (DFT), and the superscript * is the complex-conjugate operation.

In the next frame, a search patch z with the same size as x is cropped out for predicting the target position, and the corresponding response is computed by

$$\mathbf{r}(\mathbf{z}) = \mathcal{F}^{-1}\Big( \sum_{d=1}^{D} \hat{\mathbf{w}}^{d} \odot \hat{\varphi}^{d}(\mathbf{z}) \Big), \qquad (3)$$

where $\mathcal{F}^{-1}$ denotes the inverse DFT. Since a higher feature dimension implies a larger computational burden, a lightweight feature backbone network not only accelerates feature extraction but also expedites the correlation filter learning (Eq. 2) and detection (Eq. 3) processes.
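The closed-form solution above can be illustrated with a short example. The following is a minimal NumPy sketch of multi-channel filter learning (Eq. 2) and detection (Eq. 3); it illustrates the standard CF formulation rather than the exact implementation used by any of the trackers discussed below, and the Gaussian bandwidth and regularization values are placeholders.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Desired response y: a Gaussian peak centered on the patch."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (ys - h // 2) ** 2 + (xs - w // 2) ** 2
    return np.exp(-0.5 * dist2 / sigma ** 2)

def train_filter(feat_x, y, lam=1e-4):
    """Eq. 2: closed-form multi-channel filter in the Fourier domain.
    feat_x: (D, H, W) features of the target patch; y: (H, W) Gaussian label."""
    X = np.fft.fft2(feat_x, axes=(-2, -1))
    Y = np.fft.fft2(y)
    num = Y[None] * np.conj(X)                      # per-channel numerator
    den = (X * np.conj(X)).sum(axis=0).real + lam   # shared denominator
    return num / den[None]                          # \hat{w}^d

def detect(w_hat, feat_z):
    """Eq. 3: correlation response over the search-patch features (D, H, W)."""
    Z = np.fft.fft2(feat_z, axes=(-2, -1))
    resp = np.fft.ifft2((w_hat * Z).sum(axis=0)).real
    return np.unravel_index(resp.argmax(), resp.shape), resp
```

The predicted target location is the peak of the response map; the lighter the feature embedding φ, the cheaper both the FFTs and the element-wise operations above become.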

In this work, we aim to train a lightweight backbone network for efficient correlation tracking. To verify the effectiveness and generality, we select a baseline and two state-of-the-art CF frameworks as follows:

  • KCF [2] (in TPAMI 2015) is a plain CF tracker without bells and whistles. We use it to verify the feature representation capability between the teacher and student networks.

  • ECO [5] (in CVPR 2017) is based on the C-COT [15] tracker and integrates several efficient strategies. ECO adopts the features from VGG-M. We develop the fast version (fECO) using our distilled lightweight model.

  • STRCF [9] (in CVPR 2018) is a CF tracker with a Spatial-Temporal Regularization. STRCF shows impressive performance with hand-crafted features, and DeepSTRCF using VGG-M achieves further improvement but the speed is greatly limited. We implement a fast version, namely fDeepSTRCF, using our model.

IV Proposed Method

Figure 2 shows an overview of our framework involving offline knowledge distillation and online prediction. In the offline knowledge distillation step, we use two teacher networks and two shared-weight student networks. The VGG-M [39] is selected as the teacher network, which is widely used in deep CF trackers [40, 15, 5, 9, 16]. We first randomly prune the teacher network to initialize the student network. Specifically, for one convolutional layer of the teacher network, we randomly prune 7/8 of the filters in the current layer and the corresponding 7/8 of the channels in each filter of the next convolutional layer. The student network aims to produce feature representations similar to those of the teacher network while reducing the model storage by around 63 times. Figure 3 shows the detailed architectures of the student and teacher networks, where the filter capacity of the student network is 64 times smaller than that of the teacher network in each layer except the first one. As a result, the teacher network without fully connected layers is 95 MB while our lightweight model is only 1.5 MB. We denote the distilled student network as CF-VGG.
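A minimal PyTorch sketch of this initialization step is given below: it randomly keeps 1/8 of the filters of one convolutional layer and the matching input channels of the following layer. The function name, the fixed seed, and the handling of biases are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

def prune_pair(conv, next_conv, keep_ratio=1 / 8, seed=0):
    """Randomly keep `keep_ratio` of the filters in `conv` and the matching
    input channels in `next_conv` (a sketch of the 7/8 random pruning used to
    initialize the student from the teacher)."""
    g = torch.Generator().manual_seed(seed)
    keep = torch.randperm(conv.out_channels, generator=g)
    keep, _ = keep[: max(1, int(conv.out_channels * keep_ratio))].sort()

    new_conv = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                         conv.stride, conv.padding, bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()

    new_next = nn.Conv2d(len(keep), next_conv.out_channels, next_conv.kernel_size,
                         next_conv.stride, next_conv.padding,
                         bias=next_conv.bias is not None)
    new_next.weight.data = next_conv.weight.data[:, keep].clone()
    if next_conv.bias is not None:
        new_next.bias.data = next_conv.bias.data.clone()
    return new_conv, new_next
```

Applying this pair-wise pruning along the backbone reduces both the output and input channels of the intermediate layers by 8×, which is where the roughly 64× reduction in filter capacity comes from.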

In the following, we first introduce how to offline compress and transfer deep models for efficient tracking in Section IV-A. Then we present the efficient online adaptation scheme in Section IV-B.

IV-A Joint Model Transfer and Compression

In the offline training step, we propose two types of losses to simultaneously compress and transfer the teacher network: 1) the fidelity loss ensures that the student network retains the feature representation capability of the teacher network; 2) the correlation tracking loss transfers the source objective of classification to the target objective of regression for tracking. The fidelity loss mainly maintains the semantic description at high levels, while the tracking loss learns a similarity (or template matching) measure that captures the minor appearance changes of target objects between frames. Through joint training, the semantic features complement the appearance features. These two losses constitute the final objective function, which is formulated as:

$$\mathcal{L}(\theta) = \mathcal{L}_{\rm F} + \alpha\, \mathcal{L}_{\rm T} + \gamma \|\theta\|_2^{2}, \qquad (4)$$

where α is a hyper-parameter balancing the influences of these two losses, θ denotes the learnable parameters of the student network, and the last term is the weight decay with coefficient γ. In the following, we present the details of the semantic fidelity loss L_F and the correlation tracking loss L_T.
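To make the joint objective concrete, the snippet below sketches one offline training step that combines the two loss terms per Eq. 4, with the weight-decay term delegated to the optimizer. The helper names (`fid_loss`, `track_loss`) refer to the loss sketches given in the following subsections, and the value of `alpha` is a placeholder, not the paper's setting.

```python
import torch

def distillation_step(student, teacher, fid_loss, track_loss, x, z, y, alpha=1.0):
    """One offline step of Eq. 4 (weight decay handled by the optimizer)."""
    feat_sx, feat_sz = student(x), student(z)      # trainable student features
    with torch.no_grad():                          # the teacher stays frozen
        feat_tx, feat_tz = teacher(x), teacher(z)
    loss_f = fid_loss(feat_sx, feat_tx) + fid_loss(feat_sz, feat_tz)  # Eq. 5
    loss_t = track_loss(feat_sx, feat_sz, y)                          # Eq. 6 / Eq. 8
    return loss_f + alpha * loss_t                                    # Eq. 4
```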

Fig. 3: Architecture comparison between the teacher and student networks. The numbers in each convolutional layer indicate the filter number, filter channel, and filter width and height, respectively. Notice that the student network reduces both the filter number and the filter channels by 8 times, which makes it around 64 times smaller than the teacher network.

IV-A1 Semantic Fidelity Loss

Once we initialize the student network by filter pruning, the feature dimensions of the student and teacher networks are different. We use a 1×1 convolution to match their feature dimensions. Given a target patch x and a search patch z, the features from the student network and the teacher network should be as similar as possible. We propose a fidelity loss to measure the feature differences. Formally, we define the fidelity loss as:

$$\mathcal{L}_{\rm F} = \big\| \varphi_s(\mathbf{x}) - \varphi_t(\mathbf{x}) \big\|_2^{2} + \big\| \varphi_s(\mathbf{z}) - \varphi_t(\mathbf{z}) \big\|_2^{2}, \qquad (5)$$

where φ_s represents the trainable feature embedding of the student network (its dependence on θ is omitted for clarity), and φ_t is the fixed embedding of the teacher network.
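A minimal PyTorch sketch of this loss is given below. The channel sizes are placeholders rather than the exact VGG-M/CF-VGG dimensions, and the 1×1 adapter is kept inside the loss module purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FidelityLoss(nn.Module):
    """Eq. 5 sketch: lift the student features to the teacher's channel size
    with a 1x1 convolution, then penalize the L2 difference to the frozen
    teacher features."""
    def __init__(self, student_ch=64, teacher_ch=512):
        super().__init__()
        self.adapt = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, feat_s, feat_t):
        # feat_s: (N, student_ch, H, W); feat_t: (N, teacher_ch, H, W)
        return F.mse_loss(self.adapt(feat_s), feat_t.detach())

# Applied to both patches, as in Eq. 5:
#   loss_f = fid(student(x), teacher(x)) + fid(student(z), teacher(z))
```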

IV-A2 Correlation Tracking Loss

In addition to the fidelity loss, we propose the correlation tracking loss to shift the objective of the student network from classification to regression. We feed the search patch z and the target patch x into the student network to obtain their features and use a CF to model the response map regression. The circular correlation can be computed in the Fourier domain with a closed-form solution [2, 33], and the backward formulas can also be efficiently derived. The corresponding loss function is the L2 distance between the correlation response map and the ground-truth label as follows:

$$\mathcal{L}_{\rm T} = \big\| \varphi_s(\mathbf{z}) \star \mathbf{w} - \mathbf{y} \big\|_2^{2}, \qquad (6)$$

where y is a Gaussian map centered at the annotated target location and w is the correlation filter learned from φ_s(x) via Eq. 2. For clarity, in comparison with Eq. 2, we omit the feature channel index in Eq. 6 and the subsequent equations. The back-propagation of the above loss with respect to φ_s(x) and φ_s(z) is given by Eq. 7 below; interested readers can refer to [26, 41] for more details.

(7)

Fig. 4: Existing deep CF trackers learn correlation filters directly on multi-layer CNN features (left). Our framework aims at distilling a lightweight CNN backbone, using back-propagation through multiple layers to fine-tune them (right).

Furthermore, we perform the model transfer with multiple-level feature representations. Unlike existing deep CF trackers [4, 24, 15, 5] that simply integrate multiple CNN layers with empirical or learnable weights to boost the performance (see Figure 4), we separately apply the trainable constraint to multiple CNN layers to fine-tune the student network. This helps the student network not only fit the correlation tracking task better but also maintain a richer representation capability than only using the features from the last CNN layer (see more experiments in Section V-B). In this work, we take the first, second and last convolutional layers before their pooling operations as the low, middle and high-level feature representations, respectively. The final tracking loss is formulated as:

$$\mathcal{L}_{\rm T} = \sum_{l=1}^{3} \big\| \varphi_s^{l}(\mathbf{z}) \star \mathbf{w}_{l} - \mathbf{y}_{l} \big\|_2^{2}, \qquad (8)$$

where l ∈ {1, 2, 3} indexes the feature representation level, y_l contains the ground-truth labels, which are all Gaussian maps but with different spatial sizes, and φ_s^l denotes the feature embedding of the student network at the l-th level. The CFs on the different feature levels (i.e., w_l) are learned using Eq. 2.
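The per-level loss can be implemented with differentiable FFT operations, so that a modern autograd framework supplies the backward pass with respect to both feature embeddings (Eq. 7) automatically. The PyTorch sketch below is an illustration under that assumption, not the authors' MatConvNet code; the regularization value is a placeholder.

```python
import torch
import torch.fft as fft

def dcf_tracking_loss(feat_x, feat_z, y, lam=1e-4):
    """Eq. 6 for one feature level: solve the filter from the target features
    (Eq. 2), correlate it with the search features (Eq. 3), and measure the L2
    distance to the Gaussian label. Every step is differentiable, so autograd
    back-propagates into both feat_x and feat_z.
    feat_x, feat_z: (N, C, H, W); y: (N, H, W)."""
    X = fft.rfft2(feat_x)
    Z = fft.rfft2(feat_z)
    Y = fft.rfft2(y).unsqueeze(1)                               # (N, 1, H, W//2+1)
    W = Y * X.conj() / ((X * X.conj()).sum(1, keepdim=True).real + lam)
    resp = fft.irfft2((W * Z).sum(dim=1), s=feat_z.shape[-2:])  # response map
    return torch.mean((resp - y) ** 2)

# Eq. 8: sum the loss over the chosen low/middle/high feature levels, e.g.
#   loss_t = sum(dcf_tracking_loss(fx, fz, yl) for (fx, fz, yl) in levels)
```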

IV-B Background-Aware Online Adaptation

The offline distillation decreases the network capacity while preserving the feature representation. In tracking scenarios, objects belonging to the same category may be labeled differently according to the first-frame annotation. Figure 5 shows an example where only one athlete is positively labeled while the remaining ones are labeled as negative. In order to increase the feature discrimination, we fine-tune the student network online using the annotation in the first frame. Our idea is motivated by the context-aware correlation filter (CACF) [42], which regresses hard negative samples to negative labels. These hard negative samples do not overlap with the target object. In CACF [42], the context-aware information is learned through:

$$\min_{\mathbf{w}} \big\| \varphi(\mathbf{x}_0) \star \mathbf{w} - \mathbf{y} \big\|_2^{2} + \lambda_1 \|\mathbf{w}\|_2^{2} + \lambda_2 \sum_{i=1}^{k} \big\| \varphi(\mathbf{x}_i) \star \mathbf{w} \big\|_2^{2}, \qquad (9)$$

where x_0 is the positive training sample including the target, x_i (i = 1, …, k) collects the negative samples that do not overlap with the target region, and λ_1 and λ_2 are regularization weights. Given pretrained deep features, the CACF method enhances the filter-level discriminative capability. In our work, however, we explore the background-aware information in model training to boost the feature-level representation.

Fig. 5: Illustration of sample generation for background-aware online adaptation. Given the first frame, the template x is cropped centered at the target position. The foreground patches contain the target. We augment the foreground patches for training. The background patches do not include the target and their corresponding labels are set to zero.

In the offline distillation step, all the training samples contain the target object, which helps discriminate the target from the background only in a limited neighborhood. During online fine-tuning, we incorporate more negative samples to help the student network better distinguish the target from the background where hard negative objects may exist. To this end, we crop both positive and negative samples online, as shown in Figure 5. The positive samples are augmented through random flipping, shifting, blurring, and illumination changes. Finally, the target patch is fed into the template branch, and the positive and negative search samples are fed into the search branch in Figure 2. For online adaptation, we jointly exploit the multi-level transfer and the background-aware formulation. The online tracking loss is as follows:

$$\mathcal{L}_{\rm T}^{\rm on} = \sum_{l=1}^{3} \Big( \big\| \varphi_s^{l}(\mathbf{z}^{+}) \star \mathbf{w}_{l} - \mathbf{y}_{l}^{+} \big\|_2^{2} + \big\| \varphi_s^{l}(\mathbf{z}^{-}) \star \mathbf{w}_{l} - \mathbf{y}_{l}^{-} \big\|_2^{2} \Big), \qquad (10)$$

where the superscripts + and − on the label y and the search patch z denote the positive and negative samples, respectively (the labels of the negative samples are all-zero maps, as illustrated in Figure 5). As for the fidelity loss, since we still want the student network to mimic its teacher on the current video, L_F is kept the same as in Eq. 5. The online fine-tuning is only performed on the initial frame and its loss is given as follows:

$$\mathcal{L}^{\rm on} = \mathcal{L}_{\rm F} + \alpha\, \mathcal{L}_{\rm T}^{\rm on} + \gamma \|\theta\|_2^{2}. \qquad (11)$$
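The background-aware term only changes the labels that the DCF loss regresses to. Reusing the per-level loss sketched in Section IV-A2, a minimal illustration looks as follows; the augmentation parameters and helper names are assumptions for the sketch, not settings from the paper.

```python
import torch
from torchvision import transforms

# Positive-sample augmentation before online fine-tuning: flip, shift, blur,
# and illumination change (parameters are illustrative).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ColorJitter(brightness=0.3),
])

def online_tracking_loss(track_loss, feat_x, pos_feats, neg_feats, y_pos):
    """Eq. 10 for one feature level: positive search patches regress to the
    Gaussian label, background patches (no overlap with the target) regress
    to an all-zero map."""
    y_neg = torch.zeros_like(y_pos)
    return track_loss(feat_x, pos_feats, y_pos) + track_loss(feat_x, neg_feats, y_neg)
```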

IV-C Efficient Online Correlation Tracking

After we obtain the distilled lightweight backbone, we remove the additionally added 1×1 convolutional kernels and take the outputs of the remaining convolutional layers to facilitate existing CF frameworks for online tracking. We select three representative methods (i.e., KCF [2], ECO [5], and STRCF [9]) as introduced in Section III.

V Experiments

In this section, we first illustrate the implementation details and the evaluation configurations. Then we conduct an ablation study to demonstrate the effectiveness of our method. Finally, we compare with state-of-the-art trackers.

V-A Experimental Details

Implementation Details.

We use the videos for object detection from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2015) [43] dataset to offline distill the student network. During training, we use the stochastic gradient descent (SGD) solver and set the momentum and weight decay to 0.9 and 0.005, respectively. We train the network for 50 epochs with an exponentially decreasing learning rate. The multi-task weighting parameter α in Eq. 4 and Eq. 11 is set to a fixed value. In the online adaptation stage, we fine-tune the student network for only 8 iterations using the samples from the first frame. In each iteration, we crop 32 positive and negative samples as shown in Figure 5. We implement our method using MatConvNet [44] on a PC with a 4 GHz CPU and an NVIDIA GTX 1080 Ti GPU. The source code will be available at: https://github.com/594422814/CF-VGG.git
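For reference, a hypothetical PyTorch analogue of this training setup could look as follows (the paper itself uses MatConvNet); the momentum and weight decay follow the values above, while the initial learning rate, decay factor, and data loader are placeholders, and `distillation_step` refers to the sketch in Section IV-A.

```python
import torch

# student, teacher, fid_loss, track_loss and loader are assumed to be built
# as in the earlier sketches; lr and gamma below are placeholder values.
optimizer = torch.optim.SGD(student.parameters(), lr=1e-2,
                            momentum=0.9, weight_decay=0.005)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(50):
    for x, z, y in loader:                  # ILSVRC-2015 video patch pairs
        optimizer.zero_grad()
        loss = distillation_step(student, teacher, fid_loss, track_loss, x, z, y)
        loss.backward()
        optimizer.step()
    scheduler.step()                        # exponential learning-rate decay
```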

Benchmarks and Evaluation Metrics.

We evaluate our tracker on the OTB-2013 [45], OTB-2015 [8], and Temple-Color [46] datasets, which contain 50, 100 and 128 challenging videos, respectively. We report the overlap success plots on these datasets using one-pass evaluation (OPE) [45, 8] and take the area-under-curve (AUC) scores to evaluate the performance. In addition, we evaluate our tracker on the VOT-2016 [47] and VOT-2017 [48] datasets. The performance is measured by two independent metrics: accuracy (average overlap during successful tracking) and robustness (reset rate).

V-B Ablation Study

We evaluate the effectiveness of the components of the proposed algorithm in terms of computational efficiency, tracking accuracy, and model representation capability.

Efficiency.

Table I compares the efficiency and model size of our CF-VGG with those of the original teacher network VGG-M. The two networks are integrated into the state-of-the-art CF trackers ECO and DeepSTRCF. We observe that it takes around 76 ms for the VGG-M network to extract features on the CPU, which is about 8 times slower than our distilled CF-VGG network. The distilled deep features accelerate the ECO and DeepSTRCF trackers by more than 5 times on the CPU: the improved fECO and fDeepSTRCF trackers run at 27 FPS and 20 FPS vs. their original speeds of 5 FPS and 3 FPS, respectively.

In addition to the comparison with CF trackers using VGG-M, we further analyze some other representative real-time trackers. Figure 6 compares several widely adopted backbones in terms of FLOPs (feature extraction part only). The number of floating-point operations (FLOPs) of a convolutional layer is calculated as follows:

$${\rm FLOPs} = \big( C_{\rm in} \times K^{2} + 1 \big) \times H \times W \times C_{\rm out}, \qquad (12)$$

where C_in is the number of input feature channels, K is the kernel width (the kernel is assumed to be square), the +1 accounts for the bias operation, and H, W and C_out are the height, width and channel number of the output feature maps, respectively. In Table II, we list the feature map sizes and feature channels of different backbone networks. The AlexNet backbone is typically used in Siamese trackers [25, 28], while the VGG-M network is widely adopted in classification-based trackers [49, 50, 51] and CF trackers [5, 9]. After computing the FLOPs of the different backbone networks via Eq. 12, we observe that our tiny model is far more efficient than modern off-the-shelf models: as shown in Figure 6, the feature extractors of SiamFC and ECO require far more FLOPs than ours, which amount to only 0.048 B.
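As a quick check, Eq. 12 is straightforward to evaluate per layer; the function below counts a multiply-add as a single operation, which is the convention that matches the magnitudes reported in Table I. The example layer dimensions are illustrative, not taken from any of the networks in Table II.

```python
def conv_flops(c_in, k, h_out, w_out, c_out):
    """Eq. 12: (c_in * k^2) multiply-adds plus one bias operation per output
    element, summed over all H x W x C_out output elements."""
    return (c_in * k * k + 1) * h_out * w_out * c_out

# Example: a 3x3 convolution with 64 input channels producing a 56x56x64 map.
print(conv_flops(64, 3, 56, 56, 64) / 1e9, "B FLOPs")   # about 0.12 B
```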

 Tracker          Backbone Model   Model Size   Model FLOPs   CPU Feature Extraction   CPU FPS   GPU FPS
 ECO [5]          VGG-M [39]       95 MB        1.82 B        76 ms                    5         9
 fECO             CF-VGG           1.5 MB       0.048 B       9 ms                     27        48
 DeepSTRCF [9]    VGG-M [39]       95 MB        1.82 B        76 ms                    3         5
 fDeepSTRCF       CF-VGG           1.5 MB       0.048 B       9 ms                     20        35
TABLE I: Computation comparison between the ECO/DeepSTRCF trackers and our improved versions on the OTB-2013 dataset. We use the floating-point operations (FLOPs) of the convolution operations to measure the computational complexity, where B indicates billion. In practice, the actual speedup ratio is smaller than the FLOPs reduction ratio.
Fig. 6: Model computation complexity comparison. Our proposed lightweight CF-VGG produces much fewer FLOPs, which guarantees the CPU real-time correlation tracking.
Layer   AlexNet (SiamFC [25])   VGG-M (ECO [5])   CF-VGG (fECO)
 Input
 Conv1
 Pool1
 Conv2
 Pool2
 Conv3
 Conv4
 Conv5
TABLE II: Comparison of the feature map sizes and feature channels of different networks including AlexNet [52], VGG-M [39] and our CF-VGG.

The Siamese trackers [25, 53, 28, 54] adopt AlexNet-like [52] fully-convolutional networks to predict the target location in an end-to-end manner. Their tracking speed can be accelerated to over 80 FPS by a powerful GPU because the fully-convolutional structure exploits the GPU well. However, on a single CPU, the Siamese trackers are unlikely to achieve real-time performance [32], whereas our improved CF trackers can. The recent real-time MDNet tracker [55] modifies the first three convolutional layers of VGG-M and uses ROI Align for efficient binary classification. However, its backbone network still produces high FLOPs, and the additional online fine-tuning prevents CPU real-time performance. For CF trackers, only the deep feature extraction process benefits from the GPU, while the tracking part runs on the CPU even without optimization. Besides, the ECO [5] and STRCF [9] methods use the time-consuming alternating direction method of multipliers (ADMM) or Conjugate Gradient (CG) for online optimization. Thus, existing speed comparisons that do not distinguish between CPU and GPU environments are not entirely fair. With only a CPU, a Siamese tracker (e.g., SiamFC) that exceeds 80 FPS on a GPU cannot achieve real-time speed [32], whereas our trackers can, which already demonstrates the efficiency of CF trackers using our tiny model.

Compression Ratio.

In this work, we compress the off-the-shelf model by an extremely high ratio of about 63×. In Table III, we evaluate the performance, model size and CPU speed under different network compression ratios. The 1× compression ratio means that the network is not pruned but is still fine-tuned by the fidelity loss and the correlation tracking loss. It slightly outperforms the teacher model, which shows that our joint training scheme is effective and that the tracking loss slightly fine-tunes even the uncompressed model. To achieve CPU real-time speed, we choose the compression ratio of 64×. Besides the better efficiency, adopting our lightweight model also greatly reduces the required storage (our 1.5 MB vs. the original 95 MB).

 Compression Ratio   Baseline   1×       16×      32×      64×      96×
 Model Size          95 MB      95 MB    5.8 MB   2.9 MB   1.5 MB   0.99 MB
 AUC Score (%)       69.4       69.6     69.0     68.5     68.2     66.9
 CPU Speed (FPS)     5          5        9        16       27       35
TABLE III: Performance and speed analysis under different compression ratios. The baseline tracker is ECO, evaluated on the OTB-2015 benchmark [8] using the AUC metric. To achieve both satisfactory CPU real-time efficiency and performance, we choose the compression ratio of 64×.

Tracking Accuracy.

In Table IV, we show the performance evaluation results of different configurations for distilling the student network. To obtain a tiny model, one optional choice is to directly train a tiny CF-VGG from scratch with the classification loss, following VGG-M, but its performance is unsatisfactory since it may not suit the tracking task. In contrast, we propose to jointly compress and transfer a teacher network. When using only the fidelity loss (shown as “only fidelity loss”), the tracking accuracy of fECO decreases by about 4∼5%. Using only the tracking loss decreases the accuracy by a similar margin. However, equipped with both the fidelity and tracking losses, we significantly improve the performance, which means that the high-level semantic features can complement the multi-level appearance features trained via the tracking loss. When integrated into DeepSTRCF, we find that the improved fDeepSTRCF tracker even achieves higher accuracy than its baseline on both the OTB-2013 and OTB-2015 datasets, while its performance slightly decreases on the Temple-Color dataset. Finally, with online adaptation (i.e., “offline + online”), the trackers show slightly better results, and the performance gap compared with the baselines using uncompressed deep features is only about 1∼2%. In addition, our improved fECO and fDeepSTRCF trackers achieve much higher performance than ECOhc and STRCF, which both use hand-crafted features.

For the tracking speed computation, we do not include the one-off initial adaptation time; including it would slightly reduce the average speed (by about 3∼5 FPS on a CPU). However, it is worth mentioning that the offline pretrained CF-VGG model already works well even without online adaptation.

 Trackers of different variations OTB-2013 OTB-2015 TC-128
 ECO (baseline) 71.0 69.4 60.3
 ECOhc (hand-crafted feature) 65.6 64.6 54.7
 fECO (pretrained tiny model) 66.0 (-5.0) 65.5 (-3.9) 55.4 (-4.9)
 fECO (only fidelity loss) 66.1 (-4.9) 65.1 (-4.3) 55.0 (-5.3)
 fECO (only tracking loss, single-scale) 65.2 (-5.8) 64.6 (-4.8) 54.6 (-5.7)
 fECO (only tracking loss, multi-scale) 66.3 (-4.7) 65.9 (-3.5) 55.9 (-4.4)
 fECO (fidelity + multi-scale tracking) 68.4 (-2.6) 67.9 (-1.5) 57.4 (-2.9)
 fECO (offline + online fine-tune) 68.5 (-2.5) 68.2 (-1.2) 57.4 (-2.9)
 DeepSTRCF (baseline) 69.2 68.5 59.9
 STRCF (hand-crafted feature) 66.5 64.8 54.9
 fDeepSTRCF (pretrained tiny model) 65.1 (-4.1) 65.2 (-3.3) 55.2 (-4.7)
 fDeepSTRCF (only fidelity loss) 65.6 (-3.6) 65.5 (-3.0) 55.1 (-4.8)
 fDeepSTRCF (only tracking loss, single-scale) 65.7 (-3.5) 65.4 (-3.1) 54.8 (-5.1)
 fDeepSTRCF (only tracking loss, multi-scale) 66.9 (-2.3) 66.0 (-2.5) 55.5 (-4.4)
 fDeepSTRCF (fidelity + multi-scale tracking) 69.4 (+0.2) 67.8 (-0.7) 56.9 (-3.0)
 fDeepSTRCF (offline + online fine-tune) 70.3 (+1.1) 68.6 (+0.1) 57.3 (-2.6)
TABLE IV: Comparison of tracking accuracy under different training configurations. We report AUC scores on the OTB-2013 [45], OTB-2015 [8], and Temple-Color [46] datasets. The values in brackets denote the performance gap compared with the corresponding baseline with uncompressed deep model.
 Tracker   Backbone Model   Model Size   Low-level (Conv 1)   Middle-level (Conv 2)   High-level (Conv 5)   CPU FPS
 KCF       VGG-M [39]       95 MB        48.0                 50.6                    49.2                  6
 fKCF      CF-VGG           1.5 MB       46.8 (-1.2)          51.0 (+0.4)             47.1 (-2.1)           48
TABLE V: Feature representation capability comparison between VGG-M and our compressed model. We present AUC scores on the OTB-2015 [8] dataset. Our fKCF achieves comparable performance on each single feature layer with its teacher.

Model Representation Capability.

The state-of-the-art CF trackers (i.e., ECO and DeepSTRCF) employ spatial regularization to reduce boundary effects when learning correlation filters. This may raise the concern of how well the compressed CF-VGG model maintains the representation capability of the original deep model. To demonstrate the effectiveness of CF-VGG, we use the baseline KCF method [2] to evaluate the performance on each single feature level without bells and whistles. Table V shows that KCF with CF-VGG exhibits comparable performance to the original teacher network. This clearly indicates that the distilled student network maintains almost the same feature representation capability even though its model size is 63 times smaller than that of its teacher network.

Fig. 7: Success plots of real-time trackers (left) and non-realtime trackers (right) on the OTB-2013 [45] dataset. In the legend, we show the area-under-curve (AUC) score.
Fig. 8: Success plots of real-time trackers (left) and non-realtime trackers (right) on the OTB-2015 [8] dataset. Our trackers obviously surpass other real-time methods and even outperform most non-realtime deep trackers.

V-C Comparison with State-of-the-arts

We compare our fECO and fDeepSTRCF with 20 state-of-the-art trackers, which are mainly categorized as real-time trackers and non-realtime trackers.

  • Real-time Trackers: For comprehensive comparison, we collect recent high-performance real-time trackers including TRACA [35] (100 FPS), SiamRPN [30] (160 FPS), BACF [56] (35 FPS), CFNet [26] (65 FPS), CSR-DCF [14] (15 FPS), ACFN [57] (15 FPS), SiamFC [25] (86 FPS), Staple [21] (70 FPS), SCT4 [58] (50 FPS), and KCF [2] (270 FPS). It should be noted that some of these trackers require GPU to achieve high speed (e.g., TRACA, SiamRPN, CFNet, ACFN, and SiamFC). In contrast, our methods are free of such requirement.

  • Non-realtime Trackers: We compare with high accuracy trackers including VITAL [50] (1.5 FPS), DSLT [59] (5 FPS), CREST [60] (3 FPS), MCPF [61] (2 FPS), ADNet [62] (1 FPS), C-COT [15] (0.3 FPS), MDNet [49] (1 FPS), SRDCFdecon [63] (3 FPS), DeepSRDCF [40] (1 FPS), and HCF [4] (12 FPS). Among these trackers, only C-COT and SRDCFdecon are tested on CPU and all the other trackers rely on a high-end GPU. Although these trackers achieve state-of-the-art performance on the benchmarks, their computational load limits the practical usage. In the following experiments, we will show that our CPU real-time methods still outperform most of them.

OTB-2013 Dataset.

On the OTB-2013 benchmark, our fECO and fDeepSTRCF achieve AUC scores of 68.7% and 70.5%, respectively. Figure 7 (left) shows that our trackers perform better than other real-time trackers such as the recent BACF [56], SiamRPN [30] and TRACA [35]. In Figure 7 (right), we can observe that our methods achieve comparable or even better results compared with the recent low-efficiency deep trackers. It should be noted that most remarkable non-realtime trackers (e.g., VITAL [50] and MDNet [49]) cannot operate at real-time speed even with a modern GPU, while ours run in real time on a CPU.

Fig. 9: Success plots of real-time trackers (left) and non-realtime trackers (right) on the Temple-Color [46] dataset. Our trackers show outstanding performance among real-time trackers and comparably favorable results among non-realtime trackers.
VOT-2016 (EAO):
 Non-realtime trackers: ECO [5] 0.374, DSLT [59] 0.332, VITAL [50] 0.323, FlowTrack [64] 0.334, DeepSTRCF [9] 0.313, C-COT [15] 0.331, MDNet [49] 0.227.
 Real-time trackers: SiamRPN [30] 0.344, SA-Siam [28] 0.291, StructSiam [53] 0.264, MemTrack [65] 0.273, ECOhc [5] 0.238, STRCF [9] 0.279, BACF [56] 0.233, Staple [21] 0.295, SiamFC [25] 0.277, fDeepSTRCF (ours) 0.308, fECO (ours) 0.339.

VOT-2017 (EAO):
 Non-realtime trackers: LSART [66] 0.323, CFCF [67] 0.286, ECO [5] 0.280, C-COT [15] 0.267, MCPF [61] 0.248, DeepSTRCF [9] 0.227, DLST [48] 0.233.
 Real-time trackers: SiamRPN [30] 0.243, SiamDCF [48] 0.249, SA-Siam [28] 0.236, CSRDCF++ [14] 0.229, ECOhc [5] 0.238, STRCF [9] 0.162, UCT [48] 0.206, Staple [21] 0.169, SiamFC [25] 0.188, fDeepSTRCF (ours) 0.214, fECO (ours) 0.255.
TABLE VI: The expected average overlap (EAO) of state-of-the-art methods on the VOT-2016 [47] (left) and VOT-2017 [48] (right) datasets. The comparative methods include the top performers on both datasets, our baseline methods (ECO [5] and DeepSTRCF [9]) and the recently proposed trackers.
IV SV OCC DEF MB FM IPR OPR OV BC LR Overall
  TRACA [35] 61.8 56.8 57.1 56.0 58.7 57.4 58.0 59.3 56.5 60.6 50.5 60.2
  SiamRPN [30] 65.7 62.0 59.4 60.8 62.6 59.8 62.3 62.3 55.8 60.9 67.8 63.7
  BACF [56] 65.3 58.8 58.4 59.2 59.5 61.5 59.1 59.3 56.0 63.5 52.0 63.3
  CFNet [26] 54.2 54.7 51.6 47.7 54.5 55.0 56.9 54.4 41.9 55.6 63.5 56.8
  CSR-DCF [14] 54.0 52.0 53.8 53.4 58.4 57.5 51.1 51.1 51.0 52.7 44.4 58.5
  ACFN [57] 56.5 56.3 54.5 53.6 56.4 57.0 54.5 54.6 51.4 54.8 52.0 57.0
  SiamFC [25] 56.7 56.6 54.3 50.9 54.6 56.6 55.7 55.9 51.4 53.0 63.0 58.2
  Staple [21] 60.4 54.3 55.3 55.9 55.7 55.1 56.2 54.8 51.2 59.0 40.3 59.1
  SCT4 [58] 52.6 44.2 50.5 51.2 53.0 54.1 52.8 51.8 43.7 55.6 29.0 53.8
  KCF [2] 48.7 39.7 44.9 44.5 46.8 46.6 47.6 45.9 39.7 50.4 28.8 48.3
  fECO 69.1 65.4 66.2 64.8 67.3 64.5 62.8 65.6 63.0 68.5 54.1 68.2
  fDeepSTRCF 68.5 66.9 65.6 64.9 67.9 66.1 63.5 66.6 62.4 67.5 64.5 68.6
TABLE VII: Attribute-based evaluation on the OTB-2015 benchmark [8]. The evaluation metric is the area-under-curve (AUC) score of the success plot. The first and second highest values are highlighted by bold and underline.
IV SV OCC DEF MB FM IPR OPR OV BC LR Overall
  ECO [5] 71.2 68.2 68.0 63.4 70.4 68.1 65.4 67.5 67.1 71.2 58.1 69.4
  fECO 69.1 65.4 66.2 64.8 67.3 64.5 62.8 65.6 63.0 68.5 54.1 68.2
     -2.1 -2.8 -1.8 +1.4 -3.1 -3.6 -2.6 -1.9 -4.1 -2.7 -4.0 -1.2
  DeepSTRCF [9] 67.5 66.8 66.2 64.1 68.3 66.5 62.9 66.6 64.8 64.6 63.7 68.5
  fDeepSTRCF 68.5 66.9 65.6 64.9 67.9 66.1 63.5 66.6 62.4 67.5 64.5 68.6
     +1.0 +0.1 -0.6 +0.8 -0.4 -0.4 +0.6 0 -2.4 +2.9 +0.8 +0.1
TABLE VIII: Attribute-based evaluation between our methods and their corresponding baselines (ECO [5] and DeepSTRCF [9]) with uncompressed networks. The AUC score is reported on the OTB-2015 dataset [8].

OTB-2015 Dataset.

OTB-2015 is a popular tracking benchmark which extends the OTB-2013 dataset with 50 additional challenging videos. On this dataset, our fECO and fDeepSTRCF achieve AUC scores of 68.2% and 68.6%, respectively. Figure 8 shows that our methods outperform the recent real-time trackers and perform favorably against non-realtime deep trackers. The TRACA tracker [35] uses an encoder network to reduce the feature channels and achieves high speed on a GPU. In contrast, our CF-VGG not only reduces the feature dimension but also greatly accelerates the feature extraction, which brings real-time speed on the CPU and better performance (about 8% higher in AUC). The recent SiamRPN [30] improves the SiamFC tracker [25] and achieves impressive performance. However, it needs a GPU to achieve high speed, and our CPU real-time methods still outperform it by about 5% in AUC score. The deep feature representation of CF-VGG enables our trackers to surpass traditional CF trackers using empirical features (e.g., BACF [56], Staple [21], and CSR-DCF [14]). Furthermore, our methods even outperform many recent deep trackers that run at only about 1 FPS on a GPU (e.g., VITAL [50] and MDNet [49]).

Temple-Color Dataset.

We further evaluate our trackers on the Temple-Color benchmark with 128 color videos. On the Temple-Color, our fECO and fDeepSTRCF yield the AUC scores of 57.4% and 57.3%, respectively. From the left figure in Figure 9, we can observe that our trackers perform better than state-of-the-art real-time trackers (e.g., BACF [56], Staple [21] and SiamFC [25]). Compared with the non-realtime deep trackers including C-COT [15] and MCPF [61], the improved fECO and fDeepSTRCF trackers achieve comparable performance.

VOT-2016 and VOT-2017 Datasets.

Finally, we compare our trackers with state-of-the-art methods on the VOT-2016 [47] and VOT-2017 [48] benchmarks. On the VOT benchmark, a tracker will be re-initialized when tracking failure occurs. The expected average overlap (EAO) is the evaluation metric which considers both the tracking accuracy (overlap with the ground truth box) and robustness (failure times) [68]. As shown in Table VI, our methods obviously outperform ECOhc and STRCF. This affirms that CF-VGG performs favorably against empirical features. In addition, our trackers achieve comparable or even better results than the VOT-2016 top performer C-COT [15], whose running speed is only 0.3 FPS on a CPU. Compared with other state-of-the-art and recently proposed trackers (e.g., SA-Siam [28], VITAL [50], DSLT [59], SiamRPN [30], FlowTrack [64]), our methods overall show competitive performance.

Fig. 10: Attribute-based evaluation on the OTB-2015 benchmark [8]. The evaluation metric is the area-under-curve (AUC) score of the success plot. We also include the overall performance (the last plot) for convenient comparison between individual challenges and their combination. Only the top 6 real-time trackers are displayed for clarity. Our fDeepSTRCF and fECO algorithms perform favorably against state-of-the-art real-time trackers in various challenging scenes.

Attribute Evaluation.

All the 100 videos in OTB-2015 [8] are annotated with 11 different attributes, namely: background clutter (BC), deformation (DEF), out-of-plane rotation (OPR), scale variation (SV), occlusion (OCC), illumination variation (IV), motion blur (MB), in-plane rotation (IPR), out of view (OV), fast motion (FM) and low resolution (LR).

Fig. 11: Qualitative evaluation of our trackers (i.e., fECO and fDeepSTRCF) and six other state-of-the-art real-time trackers, including SiamRPN [30], BACF [56], TRACA [35], CSR-DCF [14], Staple [21] and SiamFC [25], on 10 challenging sequences (from left to right and top to bottom: Bolt2, Box, Diving, DragonBaby, Girl2, Human3, Singer2, Tiger1, Soccer and Skiing, respectively). Our fECO and fDeepSTRCF trackers perform favorably against the state-of-the-art.
Fig. 12: Failure cases of the proposed method. The videos are Ironman and Freeman4 from OTB-2015 [8]. Our compressed model struggles when an occlusion or a drastic appearance change occurs.

In Table VII and Figure 10, we show the comparison results of 10 real-time trackers (these trackers are from Section V-C) when facing the above challenging factors. The results show that our fECO and fDeepSTRCF trackers obviously outperform other competitors in almost all the challenging scenes.

In Table VIII, we further compare our methods with their teachers (i.e., ECO and DeepSTRCF) on the attributed videos. From the results, we can observe that our compressed model is comparable with the teacher model on most attributes, but is not good enough on fast motion (FM), motion blur (MB), out of view (OV) and low resolution (LR), which indicates that the representation capability of our student network still has room for improvement. It should be noted that the model size of our network is only 1/63 of its teacher's, so the slight performance degradation is acceptable since our trackers achieve a superior balance between high performance and CPU real-time efficiency.

Qualitative Evaluation.

Figure 11 shows some comparison results of our trackers (fECO and fDeepSTRCF) and six other state-of-the-art real-time trackers, including SiamRPN [30], BACF [56], TRACA [35], CSR-DCF [14], Staple [21] and SiamFC [25], on ten challenging sequences. From the results in Figure 11, we can see that our fECO and fDeepSTRCF trackers perform well under occlusion (e.g., Box, Girl2, Human3 and Soccer) and background clutter (e.g., Tiger1 and Soccer). Compared with the recent real-time deep trackers (SiamRPN [30] and TRACA [35]), our methods perform favorably while running in real time on a CPU.

V-D Failure Cases

Finally, we show some failure cases of our method in Figure 12. In the video Freeman4, the low-resolution target undergoes frequent occlusions within a short span of time, while the target in Ironman undergoes drastic appearance changes. In these cases, our compressed model is not powerful enough compared with the teacher network. In future work, we aim to include more training data and adopt a better network structure to further enhance the representation capability of the student model.

VI Conclusion

In this paper, we propose to learn a lightweight backbone network for real-time correlation tracking. By simultaneously compressing and transferring the teacher network pretrained on object recognition, we obtain a highly compressed lightweight model (63× smaller) as the feature backbone. Extensive experiments demonstrate that our training scheme and strategies are effective and efficient. Even though it is extremely lightweight, the proposed distilled backbone network is sufficiently powerful and maintains almost the same feature representation capability as the teacher network. Leveraging our lightweight model for deep correlation tracking, the recent top CF trackers consume much less memory and achieve a superior balance between high performance and CPU real-time efficiency.

References

  • [1] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui, “Visual object tracking using adaptive correlation filters,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
  • [2] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 37, no. 3, pp. 583–596, 2015.
  • [3] K. Zhang, L. Zhang, M.-H. Yang, and D. Zhang, “Fast visual tracking via dense spatio-temporal context learning,” in Proceedings of the European Conference on Computer Vision (ECCV), 2013.
  • [4] C. Ma, J.-B. Huang, X. Yang, and M.-H. Yang, “Hierarchical convolutional features for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
  • [5] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg, “Eco: Efficient convolution operators for tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [6] G. Bhat, J. Johnander, M. Danelljan, F. S. Khan, and M. Felsberg, “Unveiling the power of deep tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [7] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [8] Y. Wu, J. Lim, and M.-H. Yang, “Object tracking benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 37, no. 9, pp. 1834–1848, 2015.
  • [9] F. Li, C. Tian, W. Zuo, L. Zhang, and M.-H. Yang, “Learning spatial-temporal regularized correlation filters for visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [10] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, 2015.
  • [11] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “Fitnets: Hints for thin deep nets,” arXiv preprint arXiv:1412.6550, 2014.
  • [12] F. Liu, C. Gong, X. Huang, T. Zhou, J. Yang, and D. Tao, “Robust visual tracking revisited: From correlation filter to template matching,” IEEE Transactions on Image Processing (TIP), vol. 27, no. 6, pp. 2777–2790, 2018.
  • [13] M. Danelljan, G. Hager, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
  • [14] A. Lukezic, T. Vojir, L. Cehovin Zajc, J. Matas, and M. Kristan, “Discriminative correlation filter with channel and spatial reliability,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [15] M. Danelljan, A. Robinson, F. S. Khan, and M. Felsberg, “Beyond correlation filters: Learning continuous convolution operators for visual tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2016.
  • [16] C. Sun, D. Wang, H. Lu, and M.-H. Yang, “Correlation tracking via joint discrimination and reliability learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [17] M. Tang, B. Yu, F. Zhang, and J. Wang, “High-speed tracking with multi-kernel correlation filters,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [18] T. Zhang, S. Liu, C. Xu, B. Liu, and M.-H. Yang, “Correlation particle filter for visual tracking,” IEEE Transactions on Image Processing (TIP), vol. 27, no. 6, pp. 2676–2687, 2017.
  • [19] C. Ma, X. Yang, C. Zhang, and M.-H. Yang, “Long-term correlation tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [20] N. Wang, W. Zhou, and H. Li, “Reliable re-detection for long-term tracking,” IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2018.
  • [21] L. Bertinetto, J. Valmadre, S. Golodetz, O. Miksik, and P. Torr, “Staple: Complementary learners for real-time tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [22] N. Wang, W. Zhou, Q. Tian, R. Hong, M. Wang, and H. Li, “Multi-cue correlation filters for robust visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [23] K. Zhang, J. Fan, Q. Liu, J. Yang, and W. Lian, “Parallel attentive correlation tracking,” IEEE Transactions on Image Processing (TIP), vol. 28, no. 1, pp. 479–491, 2018.
  • [24] Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, and J. L. M.-H. Yang, “Hedged deep tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [25] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, “Fully-convolutional siamese networks for object tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2016.
  • [26] J. Valmadre, L. Bertinetto, J. F. Henriques, A. Vedaldi, and P. H. Torr, “End-to-end representation learning for correlation filter based tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [27] Q. Wang, Z. Teng, J. Xing, J. Gao, W. Hu, and S. Maybank, “Learning attentions: Residual attentional siamese network for high performance online visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [28] A. He, C. Luo, X. Tian, and W. Zeng, “A twofold siamese network for real-time object tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [29] N. Wang, Y. Song, C. Ma, W. Zhou, W. Liu, and H. Li, “Unsupervised deep tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [30] B. Li, J. Yan, W. Wu, Z. Zhu, and X. Hu, “High performance visual tracking with siamese region proposal network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [31] Z. Zhu, Q. Wang, B. Li, W. Wu, J. Yan, and W. Hu, “Distractor-aware siamese networks for visual object tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [32] C. Huang, S. Lucey, and D. Ramanan, “Learning policies for adaptive tracking with deep feature cascades,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • [33] M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in British Machine Vision Conference (BMVC), 2014.
  • [34] M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. Van de Weijer, “Adaptive color attributes for real-time visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [35] J. Choi, H. Jin Chang, T. Fischer, S. Yun, K. Lee, J. Jeong, Y. Demiris, and J. Young Choi, “Context-aware deep feature compression for high-speed visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [36] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning filters for efficient convnets,” arXiv preprint arXiv:1608.08710, 2016.
  • [37] J. Ba and R. Caruana, “Do deep nets really need to be deep?” in Advances in Neural Information Processing Systems (NeurIPS), 2014.
  • [38] G. Zhu, J. Wang, P. Wang, Y. Wu, and H. Lu, “Feature distilled tracking,” IEEE Transactions on Cybernetics, vol. 49, no. 2, pp. 440–452, 2017.
  • [39] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: Delving deep into convolutional nets,” in British Machine Vision Conference (BMVC), 2014.
  • [40] M. Danelljan, G. Hager, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshop), 2015.
  • [41] Q. Wang, J. Gao, J. Xing, M. Zhang, and W. Hu, “Dcfnet: Discriminant correlation filters network for visual tracking,” arXiv preprint arXiv:1704.04057, 2017.
  • [42] M. Mueller, N. Smith, and B. Ghanem, “Context-aware correlation filter tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [43] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.
  • [44] A. Vedaldi and K. Lenc, “Matconvnet: Convolutional neural networks for matlab,” in Proceedings of the ACM International Conference on Multimedia (ACM MM), 2014.
  • [45] Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
  • [46] P. Liang, E. Blasch, and H. Ling, “Encoding color information for visual tracking: algorithms and benchmark,” IEEE Transactions on Image Processing (TIP), vol. 24, no. 12, pp. 5630–5644, 2015.
  • [47] M. Kristan, J. Matas, A. Leonardis, M. Felsberg, L. Cehovin, G. Fernández, T. Vojir, G. Häger, et al., “The visual object tracking vot2016 challenge results,” in Proceedings of the European Conference on Computer Vision Workshops (ECCV Workshop), 2016.
  • [48] ——, “The visual object tracking vot2017 challenge results,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshop), 2017.
  • [49] H. Nam and B. Han, “Learning multi-domain convolutional neural networks for visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [50] Y. Song, C. Ma, X. Wu, L. Gong, L. Bao, W. Zuo, C. Shen, R. W. Lau, and M.-H. Yang, “Vital: Visual tracking via adversarial learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [51] S. Pu, Y. Song, C. Ma, H. Zhang, and M.-H. Yang, “Deep attentive tracking via reciprocative learning,” in Advances in Neural Information Processing Systems (NeurIPS), 2018.
  • [52] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NeurIPS), 2012.
  • [53] Y. Zhang, L. Wang, J. Qi, D. Wang, M. Feng, and H. Lu, “Structured siamese network for real-time visual tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [54] Q. Guo, W. Feng, C. Zhou, R. Huang, L. Wan, and S. Wang, “Learning dynamic siamese network for visual object tracking,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • [55] I. Jung, J. Son, M. Baek, and B. Han, “Real-time mdnet,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [56] H. K. Galoogahi, A. Fagg, and S. Lucey, “Learning background-aware correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • [57] J. Choi, H. Jin Chang, S. Yun, T. Fischer, Y. Demiris, and J. Young Choi, “Attentional correlation filter network for adaptive visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [58] J. Choi, H. Jin Chang, J. Jeong, Y. Demiris, and J. Young Choi, “Visual tracking using attention-modulated disintegration and integration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [59] X. Lu, C. Ma, B. Ni, X. Yang, I. Reid, and M.-H. Yang, “Deep regression tracking with shrinkage loss,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [60] Y. Song, C. Ma, L. Gong, J. Zhang, R. Lau, and M.-H. Yang, “Crest: Convolutional residual learning for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • [61] T. Zhang, C. Xu, and M.-H. Yang, “Multi-task correlation particle filter for robust object tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [62] S. Yun, J. Choi, Y. Yoo, K. Yun, and J. Young Choi, “Action-decision networks for visual tracking with deep reinforcement learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [63] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Adaptive decontamination of the training set: A unified formulation for discriminative visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [64] Z. Zhu, W. Wu, W. Zou, and J. Yan, “End-to-end flow correlation tracking with spatial-temporal attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [65] T. Yang and A. B. Chan, “Learning dynamic memory networks for object tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [66] C. Sun, D. Wang, H. Lu, and M.-H. Yang, “Learning spatial-aware regressions for visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [67] E. Gundogdu and A. A. Alatan, “Good features to correlate for visual tracking,” IEEE Transactions on Image Processing (TIP), vol. 27, no. 5, pp. 2526–2540, 2018.
  • [68] M. Kristan, J. Matas, A. Leonardis, T. Vojíř, R. Pflugfelder, G. Fernandez, G. Nebehay, F. Porikli, and L. Čehovin, “A novel performance evaluation methodology for single-target trackers,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 38, no. 11, pp. 2137–2155, 2016.