1 Introduction
The study of visual tracking has begun to shift from short-term tracking to large-scale long-term tracking, for roughly two reasons. First, long-term tracking is much closer to practical applications than short-term tracking. The average sequence length in short-term tracking benchmarks (OTB [46], VOT2018 [23], and TC128 [31], to name a few) is on the order of seconds, whereas the average sequence length in long-term tracking datasets (such as VOT2018LT [23], VOT2019LT [24], and OxUvALT [42]) is at least on the order of minutes. Second, the long-term tracking task additionally requires the tracker to handle frequent disappearance and reappearance of the target, i.e., to have a strong re-detection capability (more resources about long-term tracking can be found at https://github.com/wangdongdut/Long-term-Visual-Tracking).

| | ATOM* | ATOM*_LT | Ours | CLGS | SiamDW_LT |
|---|---|---|---|---|---|
| F-score | 0.527 | 0.651 | 0.697 | 0.674 | 0.665 |
| Pr | 0.589 | 0.685 | 0.721 | 0.739 | 0.697 |
| Re | 0.477 | 0.621 | 0.674 | 0.619 | 0.636 |

Figure 1: Comparison of ATOM*, ATOM*_LT, our tracker, CLGS, and SiamDW_LT in terms of F-score, Pr, and Re on VOT2019LT.
Deep-learning-based methods have dominated the short-term tracking field [30, 47, 35], from the perspective of either one-shot learning [41, 2, 15, 28, 27, 12, 53, 29] or online learning [37, 10, 8, 21, 40, 7, 49, 50, 9]. Usually, the latter methods (e.g., ECO [8], ATOM [9]) are more accurate (with less training data) but slower than the former ones (e.g., SiamFC [2], SiamRPN [28]). A curious phenomenon is that few leading long-term trackers exploit online-updated short-term trackers to conduct local tracking. MBMD [51], the winner of VOT2018LT, exploits an offline-trained regression network to directly regress the target's bounding box in a local region, and uses an online-learned verifier to switch the tracker between local tracking and global re-detection. The recent SPLT [48] method utilizes the same SiamRPN model as [51] for local tracking. SiamFC+R [42], the best method in the OxUvALT report, equips the original SiamFC [2] with a simple re-detection scheme. An important reason for this phenomenon is that online update is a double-edged sword for tracking: it captures appearance variations of both target and background, but inevitably pollutes the model with noisy samples. This risk is amplified in long-term tracking, owing to long-term uncertain observations.
Motivated by the aforementioned analysis, this work attempts to improve the long-term tracking performance from two aspects. First, we design a long-term tracking framework that exploits an online-updated tracker for local tracking. As seen in Figure 1, the tracking performance is remarkably improved by extending ATOM* to a long-term tracker (ATOM*_LT), but it remains worse than the CLGS and SiamDW_LT methods. Second, we propose a novel meta-updater to effectively guide the tracker’s update. Figure 1 shows that after adding our meta-updater, the proposed tracker achieves very promising tracking results.
Our main contributions can be summarized as follows.
- A novel offline-trained meta-updater is proposed to address an important but unsolved problem: Is the tracker ready for updating in the current frame? The proposed meta-updater effectively guides the update of the online tracker, not only facilitating the proposed tracker but also showing good generalization ability.
- A long-term tracking framework is introduced on the basis of a SiamRPN-based re-detector, an online verifier, and an online local tracker equipped with our meta-updater. Compared with other methods, our long-term tracking framework benefits from the strength of online-updated short-term trackers at low risk.
- Extensive experimental results on the VOT2018LT, VOT2019LT, OxUvALT, TLP, and LaSOT long-term benchmarks show that the proposed method outperforms state-of-the-art trackers by a large margin.
2 Related Work
2.1 Long-term Visual Tracking
Although large-scale long-term tracking benchmarks [23, 42] have emerged only since 2018, researchers have long attached importance to the long-term tracking task (with keypoint-based [17], proposal-based [54], detector-based [22, 32], and other methods). A classical algorithm is the tracking-learning-detection (TLD) method [22], which addresses long-term tracking as a combination of a local tracker (with forward-backward optical flow) and a global re-detector (with an ensemble of weak classifiers). Following this idea, many researchers [34, 32, 42] attempt to handle the long-term tracking problem with different local trackers and global re-detectors. The local tracker and global re-detector can also share the same powerful model [32, 27, 51, 48], equipped with a re-detection scheme (e.g., random search or sliding window). A crucial problem for these trackers is how to switch between the local tracker and the global re-detector. Usually, they use the outputs of the local tracker for self-evaluation, i.e., to determine whether the tracker has lost the target. This manner carries a high risk, since the outputs of local trackers are not always reliable and can sometimes mislead the switcher. The MBMD method [51], the winner of VOT2018LT, conducts local-global switching with an additional online-updated deep classifier. This tracker exploits a SiamRPN-based network to regress the target within a local search region, or within every sliding window during re-detection. The recent SPLT method [48] utilizes the same SiamRPN model as [51] for tracking and re-detection, replaces the online verifier in [51] with an offline-trained matching network, and speeds up the tracker with a skimming module. A curious phenomenon is that most top-ranked long-term trackers (such as MBMD [51], SPLT [48], and SiamRPN++ [27]) have not adopted excellent online-updated trackers (e.g., ECO [8], ATOM [9]) to conduct local tracking. One underlying reason is that the risk of online update is amplified in long-term tracking, caused by long-term uncertain observations. In this work, we attempt to resolve this dilemma by designing a high-performance long-term tracker with a meta-updater.

2.2 Online Update for Visual Tracking
For visual tracking, online update plays a vital role in capturing appearance variations of both the target and its surrounding background during tracking. Numerous schemes have been designed to achieve this goal, using template update [6, 55, 29], incremental subspace learning [39, 43], and online-learned classifiers [16, 37, 8, 9], to name a few. However, online update is a double-edged sword that must balance dynamic appearance description against the introduction of unexpected noise. Accumulated errors over a long time, inappropriate samples, or over-fitting to the available data when the target disappears can easily degrade the tracker and lead to drift, especially in long-term tracking. To deal with this dilemma, efforts have been made along at least two directions. The first aims to distill the online-collected samples by recovering or clustering noisy observations [43, 8]. Another effective attempt is to design criteria for evaluating the reliability of the current tracking result, so as to remove unreliable samples or reject inappropriate updates. These criteria include the confidence score [37], the maximum (MAX) response [9], the peak-to-sidelobe rate (PSR) [9], the average peak-to-correlation energy [44], and MAX-PSR [32]. These methods usually utilize the tracker's own output to self-evaluate its reliability. But such self-evaluation has inevitable risks, especially when the tracker experiences long-term uncertain and noisy observations. In this work, we propose a novel offline-trained meta-updater that integrates multiple cues in a sequential manner. The meta-updater outputs a binary score indicating whether the tracker should be updated in the current frame, which not only remarkably improves the performance of our long-term tracker but is also easy to embed into other online-updated trackers. Recently, some meta-learning-based methods [25, 38, 26, 18, 5, 29] have been presented. All of these methods focus on the "how to update" problem (i.e., efficiently and/or effectively updating the trackers' appearance models). By contrast, our meta-updater is designed to deal with the "when to update" problem, and it can be combined with many "how to update" algorithms to further improve tracking performance.

3 Long-term Tracking with Meta-Updater
3.1 Long-term Tracking Framework
The overall framework is presented in Figure 2. In each frame, the local tracker takes the local search region as input and outputs the bounding box of the tracked object. Then, the verifier evaluates the correctness of the current tracking result. If the output verification score is larger than a predefined threshold, the tracker continues to conduct local tracking in the next frame. If the score is smaller than the threshold, we use the Faster R-CNN detector [4] to detect all possible candidates in the next frame and crop a local search region around each candidate. Then, a SiamRPN model [51] takes each region as input and outputs the corresponding candidate boxes. These bounding boxes are sent to the verifier to identify whether the target is present. When the verifier finds the target, the local tracker is reset to adapt to the current target appearance. Before entering the next frame, all historical information is collected and sent into the proposed meta-updater. Finally, the meta-updater guides the online trackers' update.
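To make the control flow concrete, the following minimal sketch mirrors the loop described above; all component objects and method names (`local_tracker`, `verifier`, `detector`, `siamrpn`, `meta_updater`, and their methods) are hypothetical stand-ins for the modules in Figure 2, not the actual implementation.

```python
# A minimal sketch of the long-term tracking loop in Figure 2. All
# component objects and their methods are hypothetical stand-ins.
def track_sequence(frames, init_box, local_tracker, verifier,
                   detector, siamrpn, meta_updater, threshold=0.0):
    local_tracker.init(frames[0], init_box)
    tracking_locally = True
    results = [init_box]
    for frame in frames[1:]:
        if tracking_locally:
            box = local_tracker.track(frame)      # search the local region
            score = verifier.verify(frame, box)   # verification score
            if score < threshold:                 # target likely lost
                tracking_locally = False
        else:
            # Global re-detection: detect candidates, regress a box in
            # each candidate region, and let the verifier pick the target.
            candidates = detector.detect(frame)
            boxes = [siamrpn.regress(frame, c) for c in candidates]
            box, score = verifier.select_best(frame, boxes)
            if score >= threshold:                # target recovered
                local_tracker.reset(frame, box)
                tracking_locally = True
        # The meta-updater decides from historical cues whether the
        # online models may update in this frame (Section 3.2).
        if meta_updater.should_update(local_tracker.history()):
            local_tracker.update(frame, box)
        results.append(box)
    return results
```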
In this work, we implement an improved ATOM tracker (denoted as ATOM*) as our local tracker, which applies the classification branch of the ATOM method [9] for localization and exploits the SiamMask method [45] for scale estimation (in the original ATOM method [9], scale estimation is conducted via an offline-trained instance-aware IoUNet [20]; in practice, we have found that SiamMask [45] provides more accurate scale estimation, partly due to the strong supervision of pixel-wise annotations). We use the RTMDNet method [21] as our verifier, and its verification threshold is set to 0.

Strength and Imperfection. Compared with recent top-ranked long-term trackers (such as MBMD [51] and SPLT [48]), the major strength of our framework lies in embedding an online-updated local tracker into the long-term tracking framework. This idea lets the long-term tracking solution benefit from the progress of short-term trackers, and unifies the short-term and long-term tracking problems as much as possible. One imperfection is that the risk of online update is amplified by long-term uncertain observations (since, during tracking, the result of any frame except the first has no guaranteed accuracy). Thus, we propose a novel meta-updater to handle this problem and obtain more robust tracking performance.
3.2 Meta-Updater
It is essential to update the tracker to capture appearance variations of both the target and its surrounding background. However, an inappropriate update will inevitably degrade the tracker and cause tracking drift. To address this dilemma, we attempt to answer an important but unsolved question: Is the tracker ready for updating in the current frame? To be specific, we propose a meta-updater to determine whether the tracker should be updated at the present moment by integrating historical tracking results, which include geometric, discriminative, and appearance cues in a sequential manner. We introduce our meta-updater on the basis of an online tracker that outputs a response map in each frame (e.g., ECO [8], ATOM [9]). It is easy to generalize our meta-updater to other types of trackers (such as MDNet [37]).

3.2.1 Sequential Information for Meta-Updater
Given an online tracker $\mathcal{T}$, in the $t$-th frame we denote the output response map as $\mathbf{R}_t$, the output bounding box as $\mathbf{b}_t$, and the result image (cropped according to $\mathbf{b}_t$) as $\mathbf{I}_t$. The target template in the first frame is denoted as $\mathbf{I}^{*}$. An intuitive illustration is given in Figure 3.

We develop our meta-updater by mining the sequential information, integrating geometric, discriminative, and appearance cues within a given time slice.
Geometric Cue. In the $t$-th frame, the tracker outputs a bounding box $\mathbf{b}_t = (x_t, y_t, w_t, h_t)$ as the tracking state, where $(x_t, y_t)$ denotes the horizontal and vertical coordinates of the top-left corner and $(w_t, h_t)$ are the width and height of the target. This bounding box itself merely reflects the geometric shape of the tracked object in the current frame. However, a series of bounding boxes from consecutive frames contains important motion information about the target, such as velocity, acceleration, and scale change.
Discriminative Cue. Visual tracking can be considered a classification task that distinguishes the target from its surrounding background; thus, an online tracker should itself have good discriminative ability. We define a confidence score as the maximum value of the response map:

$$s^{C}_{t} = \max\left(\mathbf{R}_{t}\right). \tag{1}$$

For trackers that do not output a response map (e.g., MDNet [37]), it is also not difficult to obtain this confidence score from the classification probability or margin.

Figure 4 indicates that the confidence score is not stable during the tracking process (see the frames marked in Figure 4). In this work, we also exploit a convolutional neural network (CNN) to thoroughly mine the information within the response map, and obtain a response vector as

$$\mathbf{v}^{R}_{t} = \mathcal{N}_{R}\left(\mathbf{R}_{t}; \theta_{R}\right), \tag{2}$$

where $\mathcal{N}_{R}$ denotes the CNN model with parameters $\theta_{R}$. The output vector $\mathbf{v}^{R}_{t}$ implicitly encodes the reliability of the tracker in the current frame and is further processed by the subsequent model.
Appearance Cue. Self-evaluation of a tracker's reliability from its own outputs carries inevitable risks, since online updating with noisy samples often makes the response insensitive to appearance variations. Thus, we resort to a template matching method as a vital supplement, and define an appearance score as

$$s^{A}_{t} = \left\| \phi\left(\mathbf{I}_{t}; \theta_{\phi}\right) - \phi\left(\mathbf{I}^{*}; \theta_{\phi}\right) \right\|_{2}, \tag{3}$$

where $\phi$ is the embedding function that embeds the target and candidates into a discriminative Euclidean space, and $\theta_{\phi}$ stands for its offline-trained network parameters. As presented in [33], the network $\phi$ can be effectively trained with a combination of triplet and classification losses. The score $s^{A}_{t}$ measures the distance between the tracked result $\mathbf{I}_{t}$ and the target template $\mathbf{I}^{*}$. This template matching scheme is not affected by noisy observations.

Sequential Information. We integrate the aforementioned geometric, discriminative, and appearance cues into a sequential matrix $\mathbf{X}_{t} = \left[\mathbf{x}_{t-t_s+1}, \ldots, \mathbf{x}_{t}\right] \in \mathbb{R}^{d \times t_s}$, where $\mathbf{x}_{\tau}$ is a column vector concatenating $\mathbf{b}_{\tau}$, $s^{C}_{\tau}$, $\mathbf{v}^{R}_{\tau}$, and $s^{A}_{\tau}$; $d$ is the dimension of the concatenated cues, and $t_s$ is a time step that balances historical experience against the current observation. This sequential information is further mined with the following cascaded LSTM scheme.
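As a concrete illustration, the following minimal sketch assembles the per-frame cue vector $\mathbf{x}_t$ and the sliding time slice $\mathbf{X}_t$; the helper names and the feature layout are assumptions for illustration (the time step of 20 follows the ablation in Table 5).

```python
# A sketch of assembling the per-frame cue vector x_t and the time
# slice X_t (Section 3.2.1). Names and dimensions are illustrative.
import numpy as np
from collections import deque

def frame_cue_vector(box, response_map, response_vec, appearance_score):
    """Concatenate geometric, discriminative, and appearance cues."""
    confidence = float(response_map.max())         # Eq. (1): s^C_t
    return np.concatenate([
        np.asarray(box, dtype=np.float32),         # b_t = (x, y, w, h)
        [confidence],                              # s^C_t
        np.ravel(response_vec).astype(np.float32), # Eq. (2): v^R_t
        [appearance_score],                        # Eq. (3): s^A_t
    ])

TIME_STEP = 20                                     # t_s, best in Table 5
history = deque(maxlen=TIME_STEP)                  # sliding time slice

def push_frame(box, response_map, response_vec, appearance_score):
    history.append(frame_cue_vector(box, response_map,
                                    response_vec, appearance_score))
    if len(history) == TIME_STEP:
        return np.stack(history, axis=0)           # X_t, shape (t_s, d)
    return None                                    # not enough history yet
```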
3.2.2 Cascaded LSTM
LSTM. Here, we briefly introduce the basic ideas and notation of the LSTM [14] to make this paper self-contained. Its mathematical description is as follows:

$$
\begin{aligned}
\mathbf{f}_{t} &= \sigma\left(\mathbf{W}_{f}\mathbf{x}_{t} + \mathbf{U}_{f}\mathbf{h}_{t-1} + \mathbf{b}_{f}\right), \\
\mathbf{i}_{t} &= \sigma\left(\mathbf{W}_{i}\mathbf{x}_{t} + \mathbf{U}_{i}\mathbf{h}_{t-1} + \mathbf{b}_{i}\right), \\
\mathbf{o}_{t} &= \sigma\left(\mathbf{W}_{o}\mathbf{x}_{t} + \mathbf{U}_{o}\mathbf{h}_{t-1} + \mathbf{b}_{o}\right), \\
\mathbf{c}_{t} &= \mathbf{f}_{t} \odot \mathbf{c}_{t-1} + \mathbf{i}_{t} \odot \tanh\left(\mathbf{W}_{c}\mathbf{x}_{t} + \mathbf{U}_{c}\mathbf{h}_{t-1} + \mathbf{b}_{c}\right), \\
\mathbf{h}_{t} &= \mathbf{o}_{t} \odot \tanh\left(\mathbf{c}_{t}\right),
\end{aligned}
$$

where $\sigma$ denotes the element-wise sigmoid function, $\tanh$ stands for the element-wise hyperbolic tangent, and $\odot$ is element-wise multiplication. $\mathbf{W}_{\ast}$, $\mathbf{U}_{\ast}$, and $\mathbf{b}_{\ast}$ denote the weight matrices and bias vectors to be learned. The subscripts $f$, $i$, $o$, and $c$ stand for the forget gate, input gate, output gate, and memory cell, respectively. The remaining variables are defined as follows: (a) $\mathbf{x}_{t}$: the input vector to the LSTM unit; (b) $\mathbf{f}_{t}$: the forget gate's activation vector; (c) $\mathbf{i}_{t}$: the input gate's activation vector; (d) $\mathbf{o}_{t}$: the output gate's activation vector; (e) $\mathbf{h}_{t}$: the hidden state vector; and (f) $\mathbf{c}_{t}$: the cell state vector.
Three-stage Cascaded LSTM. After obtaining the sequential features $\mathbf{X}_{t}$ presented in Section 3.2.1, we feed them into a three-stage cascaded LSTM model, shown in Figure 5. The time steps of the three LSTMs gradually decrease to distill the sequential information and focus on recent frames. The input-output relations are presented in Table 1, where the superscript $i$ denotes the $i$-th stage LSTM. Finally, the output is processed by two fully connected layers to generate a binary classification score, indicating whether the tracker should be updated or not.
Table 1: Input-output relations of the three-stage cascaded LSTM: each stage consumes the hidden-state sequence of the previous stage over a progressively shorter time step, and the final hidden state is taken as the output.
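A minimal TensorFlow sketch of the cascaded model is given below; the per-stage time steps, hidden sizes, and feature dimension are assumed values for illustration only, since the exact settings are implementation details.

```python
# A minimal TensorFlow sketch of the three-stage cascaded LSTM
# meta-updater. Per-stage time steps (t1 > t2 > t3), hidden sizes,
# and the feature dimension are assumptions for illustration.
import tensorflow as tf

def build_meta_updater(t1=20, t2=10, t3=5, feat_dim=68, hidden=64):
    x = tf.keras.Input(shape=(t1, feat_dim))           # X_t over t_s^1 frames
    h1 = tf.keras.layers.LSTM(hidden, return_sequences=True)(x)
    h2 = tf.keras.layers.LSTM(hidden, return_sequences=True)(h1[:, -t2:, :])
    h3 = tf.keras.layers.LSTM(hidden)(h2[:, -t3:, :])  # final hidden state
    y = tf.keras.layers.Dense(64, activation='relu')(h3)
    y = tf.keras.layers.Dense(2)(y)                    # update / do-not-update
    return tf.keras.Model(x, y)
```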

3.2.3 Meta-Updater Training
Sample Collection. We run the local tracker on different training video sequences and record the tracking results in all frames (for each sequence, we initialize the target in the first frame with the groundtruth and then track it in subsequent frames, strictly following the experimental setting of online single-object tracking; the tracker is online updated in its own manner). Then, we divide these results into a series of time slices, denoted as $\mathcal{S}^{v}_{t}$, $t = t_s, \ldots, N^{v}$, $v = 1, \ldots, V$, where $v$ is the video index, $V$ is the number of training sequences, and $N^{v}$ is the total frame length of the $v$-th video. $\mathcal{S}^{v}_{t} = \left\{\tilde{\mathbf{x}}_{\tau}\right\}_{\tau=t-t_s+1}^{t}$, where $t_s$ denotes the time step. Each time slice includes the bounding box, response map, confidence score, and predicted target image in each of its frames, along with the corresponding target template; see Section 3.2.1 for more detailed descriptions (the meaning of $\tilde{\mathbf{x}}_{\tau}$ differs slightly from that of $\mathbf{x}_{\tau}$ because the parameters of the CNN models are also required to be trained).
Then, we determine the label of $\mathcal{S}^{v}_{t}$ as

$$
y^{v}_{t} =
\begin{cases}
1, & \mathrm{IoU}\left(\mathbf{b}^{v}_{t}, \mathbf{g}^{v}_{t}\right) \geq \tau_{1}, \\
0, & \mathrm{IoU}\left(\mathbf{b}^{v}_{t}, \mathbf{g}^{v}_{t}\right) \leq \tau_{0},
\end{cases}
\tag{4}
$$

where $\mathrm{IoU}$ stands for the Intersection-over-Union criterion and $\tau_{0} < \tau_{1}$ are thresholds. Slices whose IoU lies between $\tau_{0}$ and $\tau_{1}$ are not adopted in the training phase, to guarantee training convergence. $\mathbf{b}^{v}_{t}$ is the output bounding box in the $t$-th frame of video $v$, and $\mathbf{g}^{v}_{t}$ is the corresponding groundtruth (the training sequences have annotated groundtruth in every frame). Equation (4) means that the label of a given time slice is determined by whether the target is successfully located in the current (i.e., $t$-th) frame. Figure 6 visualizes some positive and negative samples for training our meta-updater.
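The labeling rule of Equation (4) can be sketched as follows; the IoU thresholds `tau_lo` and `tau_hi` are placeholders, as the exact values are implementation details.

```python
# A sketch of the time-slice labeling rule of Eq. (4); tau_lo and
# tau_hi are placeholder thresholds.
def iou(box_a, box_b):
    """Intersection-over-Union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def slice_label(pred_box, gt_box, tau_lo=0.3, tau_hi=0.5):
    """Return 1 / 0 for a training slice, or None to discard it."""
    overlap = iou(pred_box, gt_box)
    if overlap >= tau_hi:
        return 1        # target successfully located in the current frame
    if overlap <= tau_lo:
        return 0        # tracking failed; updating would inject noise
    return None         # ambiguous slice, excluded for stable convergence
```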
Model Training. In this study, the local tracker and its meta-updater are tightly coupled. The tracker affects the sample collection process used to train its meta-updater; the meta-updater, in turn, changes the tracker's behavior and thereby indirectly affects sample collection. Thus, we propose an iterative training algorithm, listed in Algorithm 1. The symbol $\mathcal{T}\left(\mathcal{M}_{k}\right)$ denotes the local tracker $\mathcal{T}$ equipped with the meta-updater $\mathcal{M}_{k}$, where $\mathcal{M}_{k}$ is the meta-updater learned after the $k$-th iteration ($\mathcal{M}_{0}$ means no meta-updater). The number of iterations is set to 3 in this work.
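A sketch of this iterative scheme is given below, with `collect_samples` and `fit_meta_updater` as hypothetical helpers standing in for the sample collection above and the training of the networks in Section 3.2.2.

```python
# A sketch of the iterative training scheme (Algorithm 1). The helpers
# collect_samples and fit_meta_updater are hypothetical stand-ins.
def train_meta_updater_iteratively(tracker, train_videos,
                                   collect_samples, fit_meta_updater, K=3):
    meta_updater = None                 # M_0: tracker runs without a MU
    for k in range(1, K + 1):
        # Run T(M_{k-1}) on the training videos to collect time slices
        # and their Eq. (4) labels under the current update behavior.
        slices, labels = collect_samples(tracker, train_videos, meta_updater)
        meta_updater = fit_meta_updater(slices, labels)   # train M_k
    return meta_updater                 # M_K is used at test time
```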
3.2.4 Generalization Ability
The introduction above assumes an online-updated tracker that outputs a response map. For trackers without a response map (e.g., MDNet [37], RTMDNet [21]), we can simply remove the subnetwork $\mathcal{N}_{R}$ and train the meta-updater with the remaining information. For trackers that are online updated with samples accumulated over time (such as ECO [8]), our meta-updater can purify the sample pool used for updating: for a given frame, if the output of the meta-updater is 0, the current tracking result is not added into the sample pool (i.e., not used for updating), as sketched below. For an ensemble of multiple online-updated trackers (such as our long-term tracker, with ATOM* for local tracking and RTMDNet for verification), we can train a single meta-updater on the information from all trackers and then use it to guide the update of all of them. Section 4.3 shows our meta-updater's generalization ability for different trackers.
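For the sample-pool case, the gating reduces to a one-line decision; the method names in this sketch are hypothetical.

```python
# Gating an accumulated sample pool (e.g., ECO-style trackers) with
# the meta-updater's binary decision; method names are hypothetical.
def maybe_collect_sample(sample_pool, frame_crop, meta_updater, history):
    if meta_updater.should_update(history):   # decision for this frame
        sample_pool.append(frame_crop)        # reliable observation kept
    # otherwise the observation never enters the update memory
```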
3.3 Implementation Details
All networks below are trained using the stochastic gradient descent optimizer with momentum. The training samples are all from the LaSOT [11] training set.
Matching Network $\phi$. The matching network adopts the ResNet-50 architecture and takes image patches as inputs. For each target, we randomly sample bounding boxes around the groundtruth in each frame. We choose the patches whose IoU with the groundtruth exceeds a threshold as positive data, and use boxes with high confidence scores from the SiamRPN-based network [51] that do not belong to the target as negative data. The network is trained for a fixed number of iterations, with the initial learning rate divided by a constant factor at regular intervals. The matching network is trained individually and kept fixed when training the remaining networks of our meta-updater.
Subnetwork $\mathcal{N}_{R}$. The input response map is first resized to a fixed resolution, processed by two convolutional layers, and then passed through a global average pooling layer, which yields the response vector $\mathbf{v}^{R}_{t}$. This subnetwork is trained jointly with the cascaded LSTMs and the two fully connected layers.
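A minimal sketch of this subnetwork is shown below; the input resolution and channel counts are assumed values for illustration.

```python
# A minimal sketch of the response-map subnetwork behind Eq. (2):
# two convolutional layers followed by global average pooling. The
# resolution and channel counts are assumed values.
import tensorflow as tf

def build_response_subnet(size=32, c1=16, c2=32):
    r = tf.keras.Input(shape=(size, size, 1))         # resized response map
    y = tf.keras.layers.Conv2D(c1, 3, padding='same', activation='relu')(r)
    y = tf.keras.layers.Conv2D(c2, 3, padding='same', activation='relu')(y)
    v = tf.keras.layers.GlobalAveragePooling2D()(y)   # response vector v^R_t
    return tf.keras.Model(r, v)
```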
LSTMs with fully connected layers. Each cell of the three-stage cascaded LSTMs uses the same number of hidden units, and the per-stage time steps $t_s^{1}$, $t_s^{2}$, and $t_s^{3}$ decrease from stage to stage. The forget bias is set to a fixed constant. The outputs are finally sent into two fully connected layers to obtain the final binary value. Each training stage of the LSTM uses mini-batches and a fixed learning rate.
4 Experiments
We implement our tracker in TensorFlow on a PC with an Intel i9 CPU (64 GB RAM) and an NVIDIA GTX 2080Ti GPU (11 GB memory). The tracking speed is approximately 13 fps. We evaluate our tracker on five benchmarks: VOT2018LT [23], VOT2019LT [24], OxUvALT [42], TLP [36], and LaSOT [11].

4.1 Quantitative Evaluation
Tracker | F-score | Pr | Re |
---|---|---|---|
LTMU(Ours) | 0.690 | 0.710 | 0.672 |
SiamRPN++ | 0.629 | 0.649 | 0.609 |
SPLT | 0.616 | 0.633 | 0.600 |
MBMD | 0.610 | 0.634 | 0.588 |
DaSiam_LT | 0.607 | 0.627 | 0.588 |
MMLT | 0.546 | 0.574 | 0.521 |
LTSINT | 0.536 | 0.566 | 0.510 |
SYT | 0.509 | 0.520 | 0.499 |
PTAVplus | 0.481 | 0.595 | 0.404 |
FuCoLoT | 0.480 | 0.539 | 0.432 |
SiamVGG | 0.459 | 0.552 | 0.393 |
SLT | 0.456 | 0.502 | 0.417 |
SiamFC | 0.433 | 0.636 | 0.328 |
SiamFCDet | 0.401 | 0.488 | 0.341 |
HMMTxD | 0.335 | 0.330 | 0.339 |
SAPKLTF | 0.323 | 0.348 | 0.300 |
ASMS | 0.306 | 0.373 | 0.259 |
VOT2018LT. We first compare our tracker with other state-of-the-art algorithms on the VOT2018LT dataset [23], which contains 35 challenging sequences of diverse objects (e.g., persons, cars, motorcycles, bicycles, and animals) with a total length of 146817 frames. Each sequence contains on average 12 long-term target disappearances, each lasting on average 40 frames. The accuracy evaluation on VOT2018LT [23] mainly includes tracking precision (Pr), tracking recall (Re), and the tracking F-score; trackers are ranked by F-score. The detailed definitions of Pr, Re, and F-score can be found in the official VOT2018 challenge report [23].
We compare our tracker with the VOT2018 official trackers and three recent methods (i.e., MBMD [51], SiamRPN++ [27], and SPLT [48]) and report the evaluation results in Table 2. The results show that the proposed tracker outperforms all other trackers by a very large margin.
VOT2019LT. The VOT2019LT dataset [24], containing 215294 frames in total, is the most recent long-term tracking dataset. Each sequence contains on average 10 long-term target disappearances, each lasting on average 52 frames. Compared with VOT2018LT [23], VOT2019LT poses more challenges, since it introduces 15 more difficult videos and some uncommon targets (e.g., boat, bull, and parachute). Its evaluation protocol is the same as that of VOT2018LT. Table 3 shows that our tracker achieves first place on the VOT2019LT challenge.
Tracker | F-score | Pr | Re |
---|---|---|---|
LTMU(Ours) | 0.697 | 0.721 | 0.674 |
CLGS | 0.674 | 0.739 | 0.619 |
SiamDW_LT | 0.665 | 0.697 | 0.636 |
mbdet | 0.567 | 0.609 | 0.530 |
SiamRPNsLT | 0.556 | 0.749 | 0.443 |
Siamfcos-LT | 0.520 | 0.493 | 0.549 |
CooSiam | 0.508 | 0.482 | 0.537 |
ASINT | 0.505 | 0.517 | 0.494 |
FuCoLoT | 0.411 | 0.507 | 0.346 |
OxUvALT. The OxUvA long-term (OxUvALT) dataset [42] contains 366 object tracks in 337 videos selected from YTBB. Each video lasts on average 2.4 minutes, much longer than in commonly used short-term datasets (such as OTB2015 [46]). The targets are sparsely labeled at a frequency of 1 Hz. The dataset is divided into two disjoint subsets, dev and test. In this work, we follow the open challenge of OxUvALT, which means that trackers may be trained on any dataset except the YTBB validation set and are tested on the OxUvALT test subset. Three criteria are adopted to evaluate trackers on OxUvALT: true positive rate (TPR), true negative rate (TNR), and maximum geometric mean (MaxGM). TPR measures the fraction of present objects that are reported present, together with the localization accuracy, and TNR gives the fraction of absent frames that are correctly reported as absent. MaxGM makes a trade-off between TPR and TNR (i.e., $\mathrm{MaxGM} = \max_{0 \leq p \leq 1} \sqrt{\left((1-p)\cdot \mathrm{TPR}\right)\left((1-p)\cdot \mathrm{TNR} + p\right)}$) and is used to rank the trackers. We compare our tracker with three recent algorithms (MBMD [51], SPLT [48], and GlobalTrack [19]) and ten algorithms reported in [42] (LCT [34], EBT [54], TLD [22], ECO-HC [8], BACF [13], Staple [1], MDNet [37], SINT [41], SiamFC [2], and SiamFC+R [42]). Table 4 shows that our tracker performs best in terms of MaxGM and TPR while maintaining a very competitive TNR value.

Tracker | MaxGM | TPR | TNR |
---|---|---|---|
LTMU(Ours) | 0.751 | 0.749 | 0.754 |
SPLT | 0.622 | 0.498 | 0.776 |
GlobalTrack | 0.603 | 0.574 | 0.633 |
MBMD | 0.544 | 0.609 | 0.485 |
SiamFC+R | 0.454 | 0.427 | 0.481 |
TLD | 0.431 | 0.208 | 0.895 |
LCT | 0.396 | 0.292 | 0.537 |
MDNet | 0.343 | 0.472 | 0 |
SINT | 0.326 | 0.426 | 0 |
ECO-HC | 0.314 | 0.395 | 0 |
SiamFC | 0.313 | 0.391 | 0 |
EBT | 0.283 | 0.321 | 0 |
BACF | 0.281 | 0.316 | 0 |
Staple | 0.261 | 0.273 | 0 |
LaSOT. The LaSOT dataset [11] is one of the most recent large-scale datasets with high-quality annotations. It contains 1400 challenging sequences (1120 for training and 280 for testing) across 70 tracking categories, with an average of 2500 frames per sequence. We follow the one-pass evaluation (success and precision) to evaluate different trackers on the LaSOT test set. Figure 7 illustrates the success and precision plots of our tracker and ten state-of-the-art algorithms, including Dimp50 [3], Dimp18 [3], GlobalTrack [19], SPLT [48], ATOM [9], SiamRPN++ [27], ECO(python) [8], StructSiam [52], DSiam [55], and MDNet [37], and shows that our tracker achieves the best results among all competing methods.
TLP. The TLP dataset [36] contains 50 HD videos from real-world scenarios, with an average of 13500 frames per sequence. We follow the one-pass evaluation (success and precision) to evaluate different trackers on the TLP dataset. As shown in Figure 8, our tracker achieves the best results among all competing methods.
4.2 Ablation Study
In this subsection, we conduct an ablation analysis of our meta-updater on the LaSOT dataset [11].
Different time steps of the meta-updater. First, we investigate the effect of different time steps. An appropriate time step achieves a good trade-off between historical information and current observations. Table 5 shows that the best performance is obtained when the time step is set to 20.
time step | 5 | 10 | 20 | 30 | 50 |
---|---|---|---|---|---|
Success | 0.553 | 0.564 | 0.572 | 0.570 | 0.567 |
Precision | 0.548 | 0.561 | 0.572 | 0.569 | 0.565 |
Different inputs for our meta-updater. For our long-term tracker, the inputs of the meta-updater include the bounding box (B), confidence score (C), response map (R), and appearance score (A). We verify their contributions by separately removing each of them from our meta-updater. Detailed results are reported in Table 6, showing that each input contributes to our meta-updater (w/o means 'without').
different input | w/o C | w/o R | w/o B | w/o A | Ours |
---|---|---|---|---|---|
Success | 0.561 | 0.568 | 0.563 | 0.549 | 0.572 |
Precision | 0.558 | 0.566 | 0.562 | 0.540 | 0.572 |
Evaluation of iterative steps. Table 7 shows that the performance is gradually improved as the number of training iterations $k$ increases.
$k$ | 0 | 1 | 2 | 3 |
---|---|---|---|---|
Success | 0.539 | 0.562 | 0.568 | 0.572 |
Precision | 0.535 | 0.558 | 0.566 | 0.572 |
4.3 Discussions
Generalization ability and speed analysis. Our meta-updater is easy to embed into other trackers with online learning. To demonstrate this generalization ability, we introduce our meta-updater into four tracking algorithms: ATOM, ECO (the official Python implementation), RTMDNet, and our base tracker (which uses a threshold to control updates). Figure 9 shows the tracking performance of the different trackers with and without the meta-updater on the LaSOT dataset, demonstrating that the proposed meta-updater consistently improves the tracking accuracy of different trackers. Table 8 reports the running speeds of these trackers with and without the proposed meta-updater, showing that the tracking speeds decrease only slightly with the additional meta-updater. Thus, our meta-updater has good generalization ability and consistently improves tracking accuracy almost without sacrificing efficiency.
Trackers | ATOM | ECO | RTMDNet | Ours-MU |
---|---|---|---|---|
FPS | 40 | 49 | 41 | 15 |
Trackers | ATOM+MU | ECO+MU | RTMDNet+MU | Ours |
FPS | 32 | 38 | 32 | 13 |
Why does our meta-updater work? We run a tracker with and without its meta-updater, and in each frame record the tracker's update state $u_t$ paired with its ground truth $g_t$. $u_t = 1$ means that the tracker has been updated in frame $t$, and $u_t = 0$ that it has not. $g_t = 1$ means that the tracker can safely be updated, and $g_t = 0$ that it cannot; the definition of this ground truth is the same as in Equation (4). We have the following concepts: (1) true positive (TP): $u_t = 1, g_t = 1$; (2) false positive (FP): $u_t = 1, g_t = 0$; (3) true negative (TN): $u_t = 0, g_t = 0$; and (4) false negative (FN): $u_t = 0, g_t = 1$. Then, we obtain the update precision (Pr) and update recall (Re) as Pr = TP/(TP+FP) and Re = TP/(TP+FN), respectively. A higher precision means that the tracker has been updated with fewer wrong observations. A higher recall means that the tracker more readily accepts updates from correct observations. We also define a true negative rate, TNR = TN/(TN+FP), to focus on wrong observations: a higher TNR value means that the tracker more strongly rejects updates from wrong observations. Table 9 shows the statistics of different trackers with and without their meta-updater modules. Using the meta-updater slightly sacrifices update recall, meaning that a portion of correct observations are not used to update the tracker compared with the variant without the meta-updater. This has little effect on the trackers' performance, because correct observations all depict the same target and carry a large amount of redundant information. In contrast, the meta-updater significantly improves the Pr and TNR values, indicating that the tracker is much less polluted by wrong observations. Thus, the risk of online update is significantly decreased.
Tracker | Pr | Re | TNR |
---|---|---|---|
RTMDNet | 0.599 | 0.993 | 0.402 |
RTMDNet+MU | 0.909 | 0.902 | 0.898 |
ECO | 0.583 | 1.000 | 0.000 |
ECO+MU | 0.852 | 0.895 | 0.803 |
ATOM | 0.765 | 0.997 | 0.310 |
ATOM+MU | 0.931 | 0.886 | 0.845 |
Ours-MU | 0.867 | 0.994 | 0.479 |
Ours | 0.952 | 0.874 | 0.862 |
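For completeness, the statistics in Table 9 can be computed from the per-frame records as follows (a minimal sketch; `u` and `g` are the recorded update states and their labels).

```python
# A minimal sketch: compute the update statistics of Table 9 from
# per-frame update states u and their labels g (lists of 0/1).
def update_statistics(u, g):
    tp = sum(1 for ut, gt in zip(u, g) if ut == 1 and gt == 1)
    fp = sum(1 for ut, gt in zip(u, g) if ut == 1 and gt == 0)
    tn = sum(1 for ut, gt in zip(u, g) if ut == 0 and gt == 0)
    fn = sum(1 for ut, gt in zip(u, g) if ut == 0 and gt == 1)
    pr = tp / (tp + fp)      # updates that used correct observations
    re = tp / (tp + fn)      # correct observations actually used
    tnr = tn / (tn + fp)     # wrong observations successfully rejected
    return pr, re, tnr
```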
5 Conclusions
This work presents a novel long-term tracking framework with the proposed meta-updater. Our framework exploits an online-update-based tracker to conduct local tracking, allowing long-term tracking performance to benefit from excellent short-term trackers with online update (such as ATOM). More importantly, a novel meta-updater is proposed that integrates geometric, discriminative, and appearance cues in a sequential manner to determine whether the tracker should be updated at the present moment. This method substantially reduces the risk of online update for long-term tracking, and effectively yet efficiently guides the tracker's update. Extensive experimental results on five recent long-term benchmarks demonstrate that our long-term tracker achieves significantly better performance than other state-of-the-art methods, and that our meta-updater has good generalization ability.
Acknowledgement. This work is supported in part by the National Natural Science Foundation of China under Grants 61872056, 61771088, 61725202, and U1903215; in part by the National Key R&D Program of China under Grant 2018AAA0102001; and in part by the Fundamental Research Funds for the Central Universities under Grant DUT19GJ201.
References
- [1] (2016) Staple: Complementary learners for real-time tracking. In CVPR, Cited by: §4.1.
- [2] (2016) Fully-convolutional siamese networks for object tracking. In ECCV Workshop, Cited by: §1, §4.1.
- [3] (2019) Learning discriminative model prediction for tracking. In ICCV, Cited by: §4.1.
- [4] (2019) MMDetection: open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155. Cited by: §3.1.
- [5] (2019) Deep meta learning for real-time target-aware visual tracking. In ICCV, Cited by: §2.2.
- [6] (2003) Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (5), pp. 564–577. Cited by: §2.2.
- [7] (2019) Visual tracking via adaptive spatially-regularized correlation filters. In CVPR, Cited by: §1.
- [8] (2017) ECO: efficient convolution operators for tracking. In CVPR, Cited by: §1, §2.1, §2.2, §3.2.4, §3.2, §4.1, §4.1.
- [9] (2019) ATOM: Accurate tracking by overlap maximization. In CVPR, Cited by: Figure 1, §1, §2.1, §2.2, §3.1, §3.2, §4.1, footnote 2.
- [10] (2016) Beyond correlation filters: learning continuous convolution operators for visual tracking. In ECCV, Cited by: §1.
- [11] (2019) LaSOT: A high-quality benchmark for large-scale single object tracking. In CVPR, Cited by: §3.3, §4.1, §4.2, §4.
- [12] (2019) Siamese cascaded region proposal networks for real-time visual tracking. In CVPR, Cited by: §1.
- [13] (2017) Learning background-aware correlation filters for visual tracking. In ICCV, Cited by: §4.1.
- [14] (2012) Supervised sequence labelling with recurrent neural networks. Studies in Computational Intelligence, Vol. 385, Springer. Cited by: §3.2.2.
- [15] (2018) A twofold siamese network for real-time object tracking. In CVPR, Cited by: §1.
- [16] (2008) High-speed tracking with kernelized correlation filters.. In ICVS, Cited by: §2.2.
- [17] (2015) MUlti-Store Tracker (MUSTer): a cognitive psychology inspired approach to object tracking. In CVPR, Cited by: §2.1.
- [18] (2019) ReEMA: regularized and reinitialized exponential moving average for target model update in object tracking. In AAAI, Cited by: §2.2.
- [19] (2020) GlobalTrack: A simple and strong baseline for long-term tracking. In AAAI, Cited by: §4.1, §4.1.
- [20] (2018) Acquisition of localization confidence for accurate object detection. In ECCV, Cited by: footnote 2.
- [21] (2018) Real-time MDNet. In ECCV, Cited by: §1, §3.1, §3.2.4.
- [22] (2012) Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (7), pp. 1409–1422. Cited by: §2.1, §4.1.
- [23] (2018) The sixth visual object tracking VOT2018 challenge results. In ECCVW, Cited by: §1, §2.1, §4.1, §4.1, Table 2, §4.
- [24] (2019) The seventh visual object tracking VOT2019 challenge results. In ICCVW, Cited by: §1, §4.1, §4.
- [25] (2018) A memory model based on the siamese network for long-term tracking. In ECCVW, Cited by: §2.2.
- [26] (2019) Learning to update for object tracking with recurrent meta-learner. IEEE Transcations on Image Processing 28 (7), pp. 3624–3635. Cited by: §2.2.
- [27] (2019) SiamRPN++: Evolution of siamese visual tracking with very deep networks. In CVPR, Cited by: §1, §2.1, §4.1, §4.1.
- [28] (2018) High performance visual tracking with siamese region proposal network. In CVPR, Cited by: §1.
- [29] (2019) GradNet: gradient-guided network for visual object tracking. In ICCV, Cited by: §1, §2.2.
- [30] (2018) Deep visual tracking: review and experimental comparison. Pattern Recognition 76, pp. 323–338. Cited by: §1.
- [31] (2015) Encoding color information for visual tracking: algorithms and benchmark. IEEE Transcations on Image Processing 24 (12), pp. 5630–5644. Cited by: §1.
- [32] (2018) FCLT - A fully-correlational long-term tracker. In ACCV, Cited by: §2.1, §2.2.
- [33] (2019) Bag of tricks and a strong baseline for deep person re-identification. In CVPR, Cited by: §3.2.1.
- [34] (2015) Long-term correlation tracking. In CVPR, Cited by: §2.1, §4.1.
- [35] (2019) Deep learning for visual tracking: A comprehensive survey. CoRR abs/1912.00535. Cited by: §1.
- [36] (2018) Long-term visual object tracking benchmark. In ACCV, Cited by: §4.1, §4.
- [37] (2016) Learning multi–domain convolutional neural networks for visual tracking. In CVPR, Cited by: §1, §2.2, §3.2.1, §3.2.4, §3.2, §4.1, §4.1.
- [38] (2018) Meta-tracker: fast and robust online adaptation for visual object trackers. In ECCV, Cited by: §2.2.
- [39] (2008) Incremental learning for robust visual tracking. International Journal of Computer Vision 77 (1-3), pp. 125–141. Cited by: §2.2.
- [40] (2018) Correlation tracking via joint discrimination and reliability learning. In CVPR, Cited by: §1.
- [41] (2016) Siamese instance search for tracking. In CVPR, Cited by: §1, §4.1.
- [42] (2018) Long-term tracking in the wild: a benchmark. In ECCV, Cited by: §1, §1, §2.1, §4.1, §4.
- [43] (2013) Online object tracking with sparse prototypes. IEEE Transcations on Image Processing 22 (1), pp. 314–325. Cited by: §2.2.
- [44] (2017) Large margin object tracking with circulant feature maps. In CVPR, pp. 4800–4808. Cited by: §2.2.
- [45] (2019) Fast online object tracking and segmentation: A unifying approach. In CVPR, Cited by: §3.1, footnote 2.
- [46] (2015) Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (9), pp. 1834–1848. Cited by: §1, §4.1.
- [47] (2020) Cooling-Shrinking Attack: Blinding the tracker with imperceptible noises. In CVPR, Cited by: §1.
- [48] (2019) Skimming-Perusal Tracking: A framework for real-time and robust long-term tracking. In ICCV, Cited by: §1, §2.1, §3.1, §4.1, §4.1, §4.1.
- [49] (2018) Correlation particle filter for visual tracking. IEEE Transactions on Image Processing 27 (6), pp. 2676–2687. Cited by: §1.
- [50] (2019) Learning multi-task correlation particle filters for visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (2), pp. 365–378. Cited by: §1.
- [51] (2018) Learning regression and verification networks for long-term visual tracking. CoRR abs/1809.04320. Cited by: §1, §2.1, §3.1, §3.1, §3.3, §4.1, §4.1.
- [52] (2018) Structured siamese network for real-time visual tracking. In ECCV, pp. 355–370. Cited by: §4.1.
- [53] (2019) Deeper and wider siamese networks for real-time visual tracking. In CVPR, Cited by: §1.
- [54] (2016) Beyond local search: tracking objects everywhere with instance-specific proposals. In CVPR, Cited by: §2.1, §4.1.
- [55] (2018) Distractor-aware siamese networks for visual object tracking. In ECCV, Cited by: §2.2, §4.1.