Visual Tracking via Boolean Map Representations

10/30/2016 ∙ by Kaihua Zhang, et al. ∙ University of California, Merced

In this paper, we present a simple yet effective Boolean map based representation that exploits connectivity cues for visual tracking. We describe a target object with histograms of oriented gradients and raw color features, each of which is characterized by a set of Boolean maps generated by uniformly thresholding their values. The Boolean maps effectively encode multi-scale connectivity cues of the target with different granularities. The fine-grained Boolean maps capture spatially structural details that are effective for precise target localization, while the coarse-grained ones encode global shape information that is robust to large target appearance variations. Finally, all the Boolean maps together form a robust representation that can be approximated by an explicit feature map of the intersection kernel, which is fed into a logistic regression classifier with online update, and the target location is estimated within a particle filter framework. The proposed representation scheme is computationally efficient and achieves favorable performance in terms of accuracy and robustness against state-of-the-art tracking methods on a large benchmark dataset of 50 image sequences.


I Introduction

Object tracking is a fundamental problem in computer vision and image processing with numerous applications. Despite significant progress in past decades, it remains a challenging task due to large appearance variations caused by illumination changes, partial occlusion, deformation, as well as cluttered backgrounds. To address these challenges, a robust representation plays a critical role in the success of a visual tracker and has attracted much attention in recent years [1].

Numerous representation schemes have been developed for visual tracking based on holistic and local features. Lucas and Kanade [2] leverage holistic templates based on raw pixel values to represent target appearance. Matthews et al. [3] design an effective template update scheme that uses stable information from the first frame for visual tracking. In [4], Henriques et al. propose a correlation filter based template (trained with raw intensity) for visual tracking with promising performance. Zhang et al. [5] propose a multi-expert restoration scheme to address the drift problem in tracking, in which each base tracker leverages an explicit feature map representation obtained by quantizing the CIE LAB color channels of spatially sampled image patches. To deal with appearance changes, subspace learning based trackers have been proposed. Black and Jepson [6] develop a pre-learned view-based eigenbasis representation for visual tracking. However, the pre-trained representation cannot adapt well to significant target appearance variations. In [7], Ross et al. propose an incremental update scheme to learn a low-dimensional subspace representation. More recently, numerous tracking algorithms based on sparse representation have been proposed. Mei and Ling [8] devise a dictionary of holistic intensity templates with target and trivial templates, and then find the location of the object with minimal reconstruction error by solving an ℓ1 minimization problem. Zhang et al. [9] formulate visual tracking as a multi-task sparse learning problem, which learns particle representations jointly. In [10], Wang et al. introduce ℓ1 regularization into the eigen-reconstruction to develop an effective representation that combines the merits of both subspace and sparse representations.

In spite of the demonstrated success of global representations for visual tracking, existing methods are less effective in dealing with heavy occlusion and large deformation because local visual cues are not taken into account. Consequently, local representations have been developed to handle occlusion and deformation. Adam et al. [11] propose a fragment-based tracking method that divides a target object into a set of local regions and represents each region with a histogram. In [12], He et al. present a locality sensitive histogram for visual tracking that considers the contributions of local regions at each pixel, which models target appearance well. Babenko et al. [13] formulate the tracking task as a multiple instance learning problem, in which Haar-like features are used to represent target appearance. Hare et al. [14] pose visual tracking as a structured output learning task and leverage Haar-like features to describe target appearance. In [15], Henriques et al. propose an algorithm based on a kernelized correlation filter (KCF) that describes target templates with feature maps based on histograms of oriented gradients (HOG) [16]. This method has been shown to achieve promising performance on the recent tracking benchmark dataset [17] in terms of accuracy and efficiency. Kwon and Lee [18] present a tracking method that represents target appearance with a set of local patches whose topology is updated to account for large shape deformation. Jia et al. [19] propose a structural sparse representation scheme that divides a target object into local image patches on a regular grid and uses the reconstruction coefficients to analyze occlusion and deformation.

Hierarchical representation methods that capture both holistic and local object appearance have been developed for visual tracking [20, 21, 22, 23]. Zhong et al. [20] propose a sparse collaborative appearance model for visual tracking in which both holistic templates and local representations are used. Li and Zhu [21] extend the KCF tracker [15] with a scale adaptive scheme and effective color features. In [22], Wang et al. demonstrate that a simple tracker based on logistic regression with a representation composed of HOG and raw color channels performs favorably on the benchmark dataset [17]. Ma et al. [23] exploit features from hierarchical layers of a convolutional neural network and learn an effective KCF that takes into account both spatial details and semantics of target objects for visual tracking.

In biological vision, it has been suggested that object tracking is carried out by attention mechanisms [24, 25]. Global topological structure such as connectivity is used to model tasks related to visual attention [26, 27]. However, none of the aforementioned representations considers topological structure for visual tracking.

In this work, we propose a Boolean map based representation (BMR) that leverages connectivity cues for visual tracking. One example of connectivity is the enclosure topological relationship between the (foreground) figure and the ground, which defines the boundaries of figures. Recent Gestalt psychological studies suggest that enclosure topological cues play an important role in figure-ground segregation, and such cues have been successfully applied to saliency detection [28] and measuring objectness [29, 30]. The proposed BMR scheme characterizes target appearance by concatenating multiple layers of Boolean maps at different granularities, obtained by uniformly thresholding HOG and color feature maps. The fine-grained Boolean maps capture locally spatial structural details that are effective for precise localization, while the coarse-grained ones encode global shape information that accounts for significant appearance variations. The Boolean maps are then concatenated and normalized to form the BMR, which can be approximated by an explicit feature map of the intersection kernel. We learn a logistic regression classifier with the BMR and update it online to estimate target locations within a particle filter framework. The effectiveness of the proposed algorithm is demonstrated on a large tracking benchmark dataset with 50 challenging videos [17] against state-of-the-art approaches.

The main contributions of this work are summarized as follows:

  • We demonstrate that the connectivity cues can be effectively used for robust visual tracking.

  • We show that the BMR scheme can be approximated by an explicit feature map of the intersection kernel, which allows a linear classifier to find a nonlinear classification boundary. In addition, this approach makes training and detection simple and efficient for robust visual tracking.

  • The proposed tracking algorithm based on the BMR scheme performs favorably in terms of accuracy and robustness to initialization on the benchmark dataset with 50 challenging videos [17] against 35 methods, including state-of-the-art trackers based on hierarchical features from deep networks (HCF) [23] and multiple experts using entropy minimization (MEEM) [5].

II Tracking via Boolean Map Representations

We present the BMR scheme and a logistic regression classifier with online update for visual tracking.

II-A Boolean Map Representation

The proposed image representation is based on recent findings on human visual attention [31], which show that momentary conscious awareness of a scene can be represented by Boolean maps. The Boolean maps are concerned with center-surround contrast, which mimics the sensitivity of neurons either to dark centers on bright surrounds or vice versa [32]. Specifically, we exploit the connectivity cues inside a target measured by the Boolean maps, which can be used to separate the foreground object from the background effectively [26, 28, 29, 30]. As demonstrated in Figure 1, the connectivity inside a target can be well captured by the Boolean maps at different scales.

Neurobiological studies have demonstrated that the human visual system is sensitive to color and edge orientations [33], which provide useful cues to discriminate the foreground object from the background. In this work, we use color features in the CIE LAB color space and HOG features to represent objects. To extract the perceptually uniform color features, we first normalize each sample to a canonical size (fixed in our experiments), then subsample it to half size to reduce appearance variations, and finally transform the sample into the CIE LAB color space, denoted as $\mathbf{x}_c$. Furthermore, we leverage HOG features to capture edge orientation information of the target object, denoted as $\mathbf{x}_g$. Figure 1 shows that most color and HOG feature maps of the target exhibit center-surround patterns similar to the biologically plausible architecture of primates in [32]. We normalize both $\mathbf{x}_c$ and $\mathbf{x}_g$ to range from 0 to 1, and concatenate them to form a feature vector $\mathbf{x} = [\mathbf{x}_c^\top, \mathbf{x}_g^\top]^\top \in \mathbb{R}^d$. The feature vector is rescaled to $[0,1]^d$ by

$\hat{\mathbf{x}} = \dfrac{\mathbf{x} - \min(\mathbf{x})}{\max(\mathbf{x}) - \min(\mathbf{x})} \qquad (1)$

where $\max(\cdot)$ and $\min(\cdot)$ denote the maximal and minimal operators, respectively.
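As a concrete illustration, the following NumPy sketch implements the min-max rescaling of (1) on a concatenated LAB+HOG feature vector. The function and argument names (e.g., build_feature_vector, lab_patch, hog_maps) are placeholders of our own choosing rather than the authors' code, and the actual patch sizes follow Section III-A.

```python
import numpy as np

def minmax_rescale(x, eps=1e-12):
    """Rescale a feature vector to [0, 1] as in (1)."""
    return (x - x.min()) / (x.max() - x.min() + eps)

def build_feature_vector(lab_patch, hog_maps):
    """Concatenate LAB color and HOG feature maps into one rescaled vector.

    lab_patch : (H, W, 3) array in the CIE LAB color space, normalized to [0, 1]
    hog_maps  : (H', W', C) array of HOG feature maps, normalized to [0, 1]
    """
    x = np.concatenate([lab_patch.ravel(), hog_maps.ravel()])
    return minmax_rescale(x)
```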

Next, $\hat{\mathbf{x}}$ in (1) is encoded into a set of vectorized Boolean maps $\{\mathbf{b}_i\}_{i=1}^{n}$ by

$\mathbf{b}_i = \mathbb{1}(\hat{\mathbf{x}} \succeq \theta_i), \quad i = 1, \ldots, n, \qquad (2)$

where $\mathbb{1}(\cdot)$ denotes the elementwise indicator function, $\theta_i$ is a threshold drawn from a uniform distribution over $[0, 1]$, and the symbol $\succeq$ denotes elementwise inequality. In this work, $\theta_i$ is simply sampled at a fixed step size $\delta$, and fixed-step sampling is equivalent to uniform sampling in the limit $\delta \to 0$ [28]. Hence, we have $\theta_i = i\delta$ with $n = \lceil 1/\delta \rceil$. It is easy to show that

$\Big| \hat{x}^j - \delta \sum_{i=1}^{n} b_i^j \Big| < \delta, \qquad (3)$

where $\hat{x}^j$ and $b_i^j$ are the $j$-th entries of $\hat{\mathbf{x}}$ and $\mathbf{b}_i$, respectively.

Fig. 2: Right two columns: reconstructed LAB+HOG representations of the target by BMRs in our experiments. Left two columns: the corresponding prototypes shown in Figure 1. Some reconstructed ones with more connected structures than their prototypes are highlighted in yellow.

Proof.

Without loss of generality, we assume that $\hat{x}^j \in [k\delta, (k+1)\delta)$ for some $k \in \{0, 1, \ldots, n\}$. As such, we have $b_i^j = 1$ for all $i \leq k$ because $\theta_i = i\delta \leq \hat{x}^j$, and $b_i^j = 0$ for $i > k$. Therefore, we have $\delta \sum_{i=1}^{n} b_i^j = k\delta$, and $\big| \hat{x}^j - \delta \sum_{i=1}^{n} b_i^j \big| = \hat{x}^j - k\delta < \delta$. ∎

In (3), when $\delta \to 0$ (i.e., $n \to \infty$) we have

$\hat{\mathbf{x}} = \delta \sum_{i=1}^{n} \mathbf{b}_i. \qquad (4)$

In this work, we set $\delta$ to a fixed value. Although (4) may not be strictly satisfied, empirical results show that most distinct structures in $\hat{\mathbf{x}}$ can be reconstructed, as demonstrated in Figure 2. Furthermore, the reconstructed representations contain more connected structures than the original ones (see the ones highlighted in yellow in Figure 2), which shows that the Boolean maps facilitate capturing global geometric information of target objects.
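The sketch below illustrates (2)-(4) with a fixed-step threshold set. The step size delta=0.25 is an assumed value for illustration, not necessarily the setting used in the paper; the final assertion checks the per-entry reconstruction bound of (3).

```python
import numpy as np

def boolean_maps(x_hat, delta=0.25):
    """Encode a rescaled feature vector into Boolean maps as in (2).

    x_hat : 1-D array with entries in [0, 1]
    delta : threshold sampling step (assumed value)
    Returns an (n, d) Boolean array, one thresholded map per row.
    """
    thresholds = np.arange(delta, 1.0 + 1e-9, delta)   # theta_i = i * delta
    return x_hat[None, :] >= thresholds[:, None]

def reconstruct(b, delta=0.25):
    """Approximate reconstruction of x_hat from its Boolean maps, cf. (3)-(4)."""
    return delta * b.sum(axis=0)

# Example: the reconstruction error of every entry stays below delta.
x_hat = np.random.rand(16)
b = boolean_maps(x_hat)
assert np.all(np.abs(x_hat - reconstruct(b)) < 0.25)
```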

Based on (4), to measure the similarity between two samples $\mathbf{x}$ and $\mathbf{y}$, we use the intersection function [34]

$\mathcal{I}(\hat{\mathbf{x}}, \hat{\mathbf{y}}) = \sum_{j=1}^{d} \min(\hat{x}^j, \hat{y}^j) \approx \delta \sum_{j=1}^{d} \sum_{i=1}^{n} \min\big(b_i^j(\mathbf{x}), b_i^j(\mathbf{y})\big) = \delta\, \mathbf{b}(\mathbf{x})^\top \mathbf{b}(\mathbf{y}), \qquad (5)$

where $\mathbf{b}(\mathbf{x}) = [\mathbf{b}_1^\top, \ldots, \mathbf{b}_n^\top]^\top$ denotes the concatenation of the Boolean maps of $\mathbf{x}$, and the last equality holds because the minimum of two Boolean entries equals their product.

To avoid favoring larger input sets [34], we normalize the intersection function in (5) and define the kernel as

$k(\mathbf{x}, \mathbf{y}) = \dfrac{\mathcal{I}(\hat{\mathbf{x}}, \hat{\mathbf{y}})}{\sqrt{\mathcal{I}(\hat{\mathbf{x}}, \hat{\mathbf{x}})\, \mathcal{I}(\hat{\mathbf{y}}, \hat{\mathbf{y}})}} \approx \Psi(\mathbf{x})^\top \Psi(\mathbf{y}), \qquad (6)$

where $\Psi(\cdot)$ is an explicit feature map function. In this work, the feature map function is defined by

$\Psi(\mathbf{x}) = \dfrac{\mathbf{b}(\mathbf{x})}{\|\mathbf{b}(\mathbf{x})\|_2}, \qquad (7)$

where $\|\cdot\|_2$ is the $\ell_2$ norm operator. We use $\Psi(\mathbf{x})$ to train a linear classifier, which is able to address the nonlinear classification problem in the feature space of $\mathbf{x}$ for visual tracking with favorable performance. The proposed tracking algorithm based on BMR is summarized in Algorithm 1.

  Input: Normalized image patch;
  1. Compute the feature vector $\hat{\mathbf{x}}$ by (1);
  2. for all entries $\hat{x}^j$ of $\hat{\mathbf{x}}$, do
  3. for $i = 1, \ldots, n$, do
  4. if $\hat{x}^j \geq i\delta$
  5. $b_i^j = 1$;
  6. else
  7. $b_i^j = 0$;
  8. end if
  9. end for
  10. end for
  11. $\mathbf{b} = [\mathbf{b}_1^\top, \ldots, \mathbf{b}_n^\top]^\top$;
  12. $\Psi(\mathbf{x}) = \mathbf{b} / \|\mathbf{b}\|_2$;
  Output: BMR $\Psi(\mathbf{x})$.
Algorithm 1 BMR
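A compact Python sketch of Algorithm 1 and of the kernel approximation in (5)-(7) is given below. It assumes that the normalization in (6) divides by the square root of the self-similarities (as in the pyramid match kernel [34]), so that Psi reduces to the ℓ2-normalized stack of Boolean maps; the step size is again an assumed value.

```python
import numpy as np

def bmr(x_hat, delta=0.25):
    """Algorithm 1: stack the Boolean maps of x_hat and L2-normalize them so
    that bmr(x) @ bmr(y) approximates the normalized intersection kernel (6)."""
    thresholds = np.arange(delta, 1.0 + 1e-9, delta)
    b = (x_hat[None, :] >= thresholds[:, None]).astype(np.float64).ravel()
    return b / (np.linalg.norm(b) + 1e-12)

def normalized_intersection(x_hat, y_hat):
    """Exact normalized histogram-intersection kernel for reference, cf. (5)-(6)."""
    inter = np.minimum(x_hat, y_hat).sum()
    return inter / np.sqrt(x_hat.sum() * y_hat.sum())

# The linear inner product of two BMRs approximates the normalized intersection.
x_hat, y_hat = np.random.rand(256), np.random.rand(256)
approx = bmr(x_hat) @ bmr(y_hat)
exact = normalized_intersection(x_hat, y_hat)
```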

II-B Learning Linear Classifier with BMRs

We pose visual tracking as a binary classification problem with local search, in which a linear classifier is learned in the Boolean map feature space to separate the target from the background. Specifically, we use a logistic regressor to learn the classifier for measuring similarity scores of samples.

Let $\mathbf{l}_t^i$ denote the location of the $i$-th sample at frame $t$. We assume that $\mathbf{l}_t^{\star}$ is the object location, and densely draw samples within a search radius $\alpha$ centered at the current object location, i.e., from the set $\{\mathbf{l} : \|\mathbf{l} - \mathbf{l}_t^{\star}\| < \alpha\}$, and label them as positive samples. Next, we uniformly sample some patches from the annular set $\{\mathbf{l} : \zeta < \|\mathbf{l} - \mathbf{l}_t^{\star}\| < \beta\}$, and label them as negative samples. After representing these samples with BMRs, we obtain a set of training data $\{(\Psi(\mathbf{x}_t^i), y_t^i)\}_{i=1}^{N}$, where $y_t^i \in \{+1, -1\}$ is the class label and $N$ is the number of samples. The cost function at frame $t$ is defined as the negative log-likelihood for logistic regression,

$\ell_t(\mathbf{w}) = \dfrac{1}{N} \sum_{i=1}^{N} \log\Big(1 + \exp\big(-y_t^i\, \mathbf{w}^\top \Psi(\mathbf{x}_t^i)\big)\Big), \qquad (8)$

where $\mathbf{w}$ is the classifier parameter vector, and the corresponding classifier is denoted as

$h(\mathbf{x}) = \dfrac{1}{1 + \exp\big(-\mathbf{w}^\top \Psi(\mathbf{x})\big)}. \qquad (9)$

We use a gradient descent method to minimize $\ell_t(\mathbf{w})$ by iterating

$\mathbf{w} \leftarrow \mathbf{w} - \eta\, \dfrac{\partial \ell_t(\mathbf{w})}{\partial \mathbf{w}}, \qquad (10)$

where $\eta$ is the step size. In this work, we use the parameter vector obtained at frame $t-1$ to initialize $\mathbf{w}$ in (10) and iterate 20 times for updates.
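For concreteness, a minimal sketch of the update in (10) is given below. The learning rate lr is an assumed hyperparameter (the paper only specifies iterating 20 times per update), and Psi denotes the matrix whose rows are the BMRs of the training samples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_classifier(w, Psi, y, lr=1e-2, n_iter=20):
    """Minimize the logistic loss in (8) by plain gradient descent, cf. (10).

    w   : current parameter vector (warm-started from the previous frame)
    Psi : (N, D) matrix of BMRs of the training samples
    y   : (N,) labels in {+1, -1}
    """
    for _ in range(n_iter):
        margins = y * (Psi @ w)
        # Gradient of mean log(1 + exp(-y * w^T psi)) with respect to w.
        grad = -(Psi.T @ (y * sigmoid(-margins))) / len(y)
        w = w - lr * grad
    return w
```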

  Input: Target state $\mathbf{s}_{t-1}^{\star}$, classifier parameter vector $\mathbf{w}_{t-1}$;
  1. Sample $N_p$ candidate particles $\{\mathbf{s}_t^i\}_{i=1}^{N_p}$ with the motion model in (12);
  2. For each particle $\mathbf{s}_t^i$, extract the corresponding image patch $\mathbf{x}_t^i$, compute the BMR $\Psi(\mathbf{x}_t^i)$ by Algorithm 1, and compute the corresponding observation model $p(\mathbf{o}_t \mid \mathbf{s}_t^i)$ by (13);
  3. Estimate the optimal state $\mathbf{s}_t^{\star}$ by (12), and obtain the corresponding image patch $\mathbf{x}_t^{\star}$;
  4. if $h(\mathbf{x}_t^{\star}) < \theta_0$
  5. Update $\mathbf{w}$ by iterating (10) until convergence, and set $\mathbf{w}_t = \mathbf{w}$;
  6. else
  7. $\mathbf{w}_t = \mathbf{w}_{t-1}$;
  8. end if
  Output: Target state $\mathbf{s}_t^{\star}$ and classifier parameter vector $\mathbf{w}_t$.
Algorithm 2 BMR-based Tracking

II-C Proposed Tracking Algorithm

We estimate the target states sequentially within a particle filter framework. Given the observation set $\mathbf{o}_{1:t} = \{\mathbf{o}_1, \ldots, \mathbf{o}_t\}$ up to frame $t$, the target state $\mathbf{s}_t^{\star}$ is obtained by maximizing the posteriori probability

$\mathbf{s}_t^{\star} = \arg\max_{\mathbf{s}_t} p(\mathbf{s}_t \mid \mathbf{o}_{1:t}) \propto \arg\max_{\mathbf{s}_t} p(\mathbf{o}_t \mid \mathbf{s}_t) \int p(\mathbf{s}_t \mid \mathbf{s}_{t-1})\, p(\mathbf{s}_{t-1} \mid \mathbf{o}_{1:t-1})\, d\mathbf{s}_{t-1}, \qquad (11)$

where $\mathbf{s}_t = [x_t, y_t, s_t]^\top$ is the target state with translations $x_t$ and $y_t$, and scale $s_t$; $p(\mathbf{s}_t \mid \mathbf{s}_{t-1})$ is a dynamic model that describes the temporal correlation of the target states in two consecutive frames; and $p(\mathbf{o}_t \mid \mathbf{s}_t)$ is the observation model that estimates the likelihood of a state given an observation. In the proposed algorithm, we assume that the target state parameters are independent and modeled by three scalar Gaussian distributions between two consecutive frames, i.e., $p(\mathbf{s}_t \mid \mathbf{s}_{t-1}) = \mathcal{N}(\mathbf{s}_t; \mathbf{s}_{t-1}, \boldsymbol{\Sigma})$, where $\boldsymbol{\Sigma}$ is a diagonal covariance matrix whose elements are the standard deviations of the target state parameters. In visual tracking, the posterior probability $p(\mathbf{s}_t \mid \mathbf{o}_{1:t})$ in (11) is approximated by a finite set of particles $\{\mathbf{s}_t^i\}_{i=1}^{N_p}$ that are sampled with corresponding importance weights $\{\pi_t^i\}_{i=1}^{N_p}$, where $\pi_t^i \propto p(\mathbf{o}_t \mid \mathbf{s}_t^i)$. Therefore, (11) can be approximated as

$\mathbf{s}_t^{\star} = \arg\max_{\mathbf{s}_t^i} p(\mathbf{o}_t \mid \mathbf{s}_t^i)\, p(\mathbf{s}_t^i \mid \mathbf{s}_{t-1}^{\star}). \qquad (12)$

In our method, the observation model is defined as

$p(\mathbf{o}_t \mid \mathbf{s}_t^i) \propto h(\mathbf{x}_t^i), \qquad (13)$

where $h(\cdot)$ is the logistic regression classifier defined by (9).

To adapt to target appearance variations while preserving the stable information that helps prevent the tracker from drifting to the background, we update the classifier parameters in a conservative way. We update $\mathbf{w}$ by (10) only when the confidence of the target falls below a threshold $\theta_0$. This ensures that the target states always have high confidence scores and alleviates the problem of including noisy samples when updating the classifier [22]. The main steps of the proposed algorithm are summarized in Algorithm 2.
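The following sketch outlines one tracking step of Algorithm 2 within the particle filter of (11)-(13). The helper frame_patches_fn, the particle count, and the threshold value are hypothetical placeholders; since particles are drawn from the Gaussian motion model, the particle weight reduces to the classifier confidence in (13).

```python
import numpy as np

def track_frame(frame_patches_fn, s_prev, w, sigmas, n_particles=400, theta0=0.9):
    """One BMR tracking step in the particle filter framework of (11)-(13).

    frame_patches_fn : callable mapping a state (x, y, scale) to its BMR vector
                       (crop, resize, run Algorithm 1); hypothetical helper
    s_prev           : previous target state [x, y, scale]
    w                : logistic regression parameters
    sigmas           : standard deviations of the Gaussian motion model
    n_particles and theta0 are assumed values, not the paper's settings.
    """
    # Motion model: independent Gaussians around the previous state.
    particles = s_prev + np.random.randn(n_particles, 3) * sigmas
    # Observation model (13): confidence of the logistic regression classifier (9).
    scores = np.array([1.0 / (1.0 + np.exp(-(w @ frame_patches_fn(s))))
                       for s in particles])
    best = int(np.argmax(scores))           # MAP estimate over the particles, cf. (12)
    update_needed = scores[best] < theta0   # conservative update rule of Algorithm 2
    return particles[best], update_needed
```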

III Experimental Results

We first present implementation details of the proposed algorithm, and discuss the dataset and metrics for performance evaluation. Next, we analyze the empirical results using widely adopted metrics. We then present an ablation study to examine the effectiveness of each key component of the proposed BMR scheme. Finally, we show and analyze some failure cases.

III-A Implementation Details

All images are resized to a fixed size [22] for the experiments, and each patch is resized to a canonical size. In addition, each canonical patch is subsampled to half size for the color representations. The HOG features are extracted from the canonical patches using an implementation that supports both grayscale and color images, and the sizes of the HOG feature maps are determined by the canonical patch size (as implemented in https://github.com/pdollar/toolbox).

For grayscale videos, the original image patches are used to extract raw intensity and HOG features. For color videos, the image patches are transformed to the CIE LAB color space to extract raw color features, and the original RGB image patches are used to extract HOG features, so the total feature dimension, and hence the dimension of the BMRs, is higher than for grayscale videos. The number of Boolean maps $n$, the sampling step $\delta$, the search radius for positive samples, the inner and outer search radii and the search step for negative samples (defined relative to the width and height of the target, which generates a small subset of negative samples), the standard deviations of the target state parameters for the particle filter, the number of particles, and the confidence threshold $\theta_0$ are all fixed for all sequences, and the source code will be made available to the public. More results and videos are available at http://kaihuazhang.net/bmr/bmr.htm.

III-B Dataset and Evaluation Metrics

For performance evaluation, we use the tracking benchmark dataset and code library [17], which includes 29 trackers and 50 fully-annotated videos. In addition, we add the results of 6 recent trackers: DLT [35], DSST [36], KCF [15], TGPR [37], MEEM [5], and HCF [23]. For detailed analysis, the sequences are annotated with 11 attributes based on different challenging factors including low resolution (LR), in-plane rotation (IPR), out-of-plane rotation (OPR), scale variation (SV), occlusion (OCC), deformation (DEF), background clutters (BC), illumination variation (IV), motion blur (MB), fast motion (FM), and out-of-view (OV).

We quantitatively evaluate the trackers with success and precision plots [17]. Given the tracked bounding box $B_T$ and the ground truth bounding box $B_G$, the overlap score is defined as $S = \frac{|B_T \cap B_G|}{|B_T \cup B_G|}$, where $\cap$ and $\cup$ denote the intersection and union of two regions and $|\cdot|$ counts the number of pixels in a region. Hence, $S \in [0, 1]$, and a larger value of $S$ means better performance of the evaluated tracker. The success plot shows the percentage of frames with $S > t_0$ over all thresholds $t_0 \in [0, 1]$. Furthermore, the area under curve (AUC) of each success plot serves as a measure to rank the evaluated trackers. On the other hand, the precision plot shows the percentage of frames whose tracked locations are within a given threshold distance (i.e., 20 pixels in [17]) of the ground truth. Both success and precision plots are used in the one-pass evaluation (OPE), temporal robustness evaluation (TRE), and spatial robustness evaluation (SRE), where OPE reports the average precision or success rate by running the trackers through a test sequence with initialization from the ground truth position, and TRE and SRE measure a tracker's robustness to initialization with temporal and spatial perturbations, respectively [17]. We report the OPE, TRE, and SRE results. For presentation clarity, we only present the top 10 algorithms in each plot.
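A short sketch of the overlap score and of one point on the success plot is given below; the box format (x, y, w, h) and the helper names are our own conventions, not those of the benchmark code [17].

```python
import numpy as np

def overlap_score(bt, bg):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    x1, y1 = max(bt[0], bg[0]), max(bt[1], bg[1])
    x2 = min(bt[0] + bt[2], bg[0] + bg[2])
    y2 = min(bt[1] + bt[3], bg[1] + bg[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = bt[2] * bt[3] + bg[2] * bg[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(tracked, ground_truth, threshold=0.5):
    """Fraction of frames whose overlap exceeds a threshold (one point on the
    success plot); sweeping the threshold over [0, 1] yields the AUC measure."""
    scores = [overlap_score(bt, bg) for bt, bg in zip(tracked, ground_truth)]
    return float(np.mean([s > threshold for s in scores]))
```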

Fig. 3: Success and precision plots of OPE, TRE, and SRE by the top 10 trackers. The trackers are ranked by the AUC scores (shown in the legends) when the success rates are used, or by the precision scores at the threshold of 20 pixels.

III-C Empirical Results

1) Overall Performance: Figure 3 shows the overall performance of the top 10 trackers in terms of success and precision plots. The BMR-based tracking algorithm ranks first on the success rate of OPE, and second on TRE and SRE. Furthermore, the BMR-based method ranks third on the precision rates of OPE, TRE, and SRE. Overall, the proposed BMR-based tracker performs favorably against the state-of-the-art methods in terms of all metrics except for MEEM [5] and HCF [23]. The MEEM tracker exploits a multi-expert restoration scheme to handle the drift problem, which combines a tracker and its historical snapshots as experts. In contrast, even though it uses only a logistic regression classifier without any restoration strategy, the proposed BMR-based method performs well against MEEM in terms of most metrics (i.e., the success rates of the BMR-based method outperform the MEEM scheme while the precision rates are comparable), which shows the effectiveness of the proposed representation scheme for visual tracking. In addition, the HCF method is based on deep learning: it leverages hierarchical convolutional features learned offline from a large dataset together with correlation filters for visual tracking. Nevertheless, the proposed BMR-based algorithm performs comparably against HCF in terms of success rates on all metrics.

Attribute BMR HCF [23] MEEM [5] KCF [15] DSST [36] TGPR [37] SCM [20] Struck [14] TLD [38] ASLA [19] DLT [35]
LR (4) 0.409 0.557 0.360 0.310 0.352 0.370 0.279 0.372 0.309 0.157 0.256
IPR (31) 0.557 0.582 0.535 0.497 0.532 0.479 0.458 0.444 0.416 0.425 0.383
OPR (39) 0.590 0.587 0.558 0.496 0.491 0.485 0.470 0.432 0.420 0.422 0.393
SV (28) 0.586 0.531 0.498 0.427 0.451 0.418 0.518 0.425 0.421 0.452 0.458
OCC (29) 0.615 0.606 0.552 0.513 0.480 0.484 0.487 0.413 0.402 0.376 0.384
DEF (19) 0.594 0.626 0.560 0.533 0.474 0.510 0.448 0.393 0.378 0.372 0.330
BC (21) 0.555 0.623 0.569 0.533 0.492 0.522 0.450 0.458 0.345 0.408 0.327
IV (25) 0.551 0.560 0.533 0.494 0.506 0.484 0.473 0.428 0.399 0.429 0.392
MB (12) 0.559 0.616 0.541 0.499 0.458 0.434 0.298 0.433 0.404 0.258 0.329
FM (17) 0.559 0.578 0.553 0.461 0.433 0.396 0.296 0.462 0.417 0.247 0.353
OV (6) 0.616 0.575 0.606 0.550 0.490 0.442 0.361 0.459 0.457 0.312 0.409
TABLE I: Success scores of OPE with 11 attributes. The number after each attribute name is the number of sequences. The red, blue and green fonts indicate the best, second and third performance.
Attribute BMR HCF [23] MEEM [5] KCF [15] DSST [36] TGPR [37] SCM [20] Struck [14] TLD [38] ASLA [19] DLT [35]
LR (4) 0.517 0.897 0.490 0.379 0.534 0.538 0.305 0.545 0.349 0.156 0.303
IPR (31) 0.776 0.868 0.800 0.725 0.780 0.675 0.597 0.617 0.584 0.511 0.510
OPR (39) 0.819 0.869 0.840 0.730 0.732 0.678 0.618 0.597 0.596 0.518 0.527
SV (28) 0.803 0.880 0.785 0.680 0.740 0.620 0.672 0.639 0.606 0.552 0.606
OCC (29) 0.846 0.877 0.799 0.749 0.725 0.675 0.640 0.564 0.563 0.460 0.495
DEF (19) 0.802 0.881 0.846 0.741 0.657 0.691 0.586 0.521 0.512 0.445 0.512
BC (21) 0.742 0.885 0.797 0.752 0.691 0.717 0.578 0.585 0.428 0.496 0.440
IV (25) 0.742 0.844 0.766 0.729 0.741 0.671 0.594 0.558 0.537 0.517 0.492
MB (12) 0.755 0.844 0.715 0.650 0.603 0.537 0.339 0.551 0.518 0.278 0.427
FM (17) 0.758 0.790 0.742 0.602 0.562 0.493 0.333 0.604 0.551 0.253 0.435
OV (6) 0.773 0.695 0.727 0.649 0.533 0.505 0.429 0.539 0.576 0.333 0.505
TABLE II: Precision scores of OPE with 11 attributes. The number after each attribute name is the number of sequences. The red, blue and green fonts indicate the best, second and third performance.
Attribute BMR HCF [23] MEEM [5] KCF [15] DSST [36] TGPR [37] SCM [20] Struck [14] TLD [38] ASLA [19] DLT [35]
LR (4) 0.444 0.520 0.424 0.382 0.403 0.443 0.304 0.456 0.299 0.278 0.324
IPR (31) 0.562 0.591 0.558 0.520 0.515 0.514 0.453 0.473 0.406 0.451 0.423
OPR (39) 0.578 0.595 0.572 0.531 0.507 0.523 0.480 0.477 0.425 0.465 0.428
SV (28) 0.564 0.544 0.517 0.488 0.473 0.468 0.496 0.446 0.418 0.487 0.448
OCC (29) 0.585 0.610 0.566 0.547 0.519 0.520 0.502 0.462 0.426 0.444 0.426
DEF (19) 0.599 0.651 0.611 0.571 0.548 0.577 0.515 0.500 0.425 0.466 0.399
BC (21) 0.575 0.631 0.577 0.565 0.518 0.530 0.469 0.478 0.372 0.445 0.366
IV (25) 0.555 0.597 0.564 0.528 0.529 0.518 0.475 0.486 0.402 0.468 0.427
MB (12) 0.537 0.594 0.553 0.493 0.472 0.483 0.290 0.485 0.388 0.296 0.349
FM (17) 0.516 0.560 0.542 0.456 0.429 0.461 0.282 0.464 0.392 0.285 0.350
OV (6) 0.593 0.557 0.581 0.539 0.505 0.440 0.344 0.417 0.434 0.325 0.403
TABLE III: Success scores of TRE with 11 attributes. The number after each attribute name is the number of sequences. The red, blue and green fonts indicate the best, second and third performance.
Attribute BMR HCF [23] MEEM [5] KCF [15] DSST [36] TGPR [37] SCM [20] Struck [14] TLD [38] ASLA [19] DLT [35]
LR (4) 0.581 0.750 0.589 0.501 0.574 0.602 0.350 0.628 0.376 0.325 0.391
IPR (31) 0.767 0.851 0.802 0.728 0.725 0.716 0.581 0.650 0.569 0.582 0.572
OPR (39) 0.789 0.859 0.826 0.749 0.719 0.728 0.617 0.660 0.597 0.605 0.584
SV (28) 0.769 0.840 0.787 0.727 0.717 0.676 0.633 0.652 0.600 0.634 0.594
OCC (29) 0.791 0.854 0.788 0.758 0.726 0.705 0.633 0.631 0.579 0.560 0.550
DEF (19) 0.798 0.889 0.854 0.757 0.723 0.765 0.635 0.655 0.571 0.571 0.556
BC (21) 0.772 0.874 0.793 0.776 0.697 0.721 0.600 0.622 0.488 0.575 0.517
IV (25) 0.747 0.851 0.792 0.729 0.727 0.693 0.585 0.643 0.543 0.584 0.572
MB (12) 0.720 0.785 0.724 0.626 0.597 0.607 0.323 0.617 0.491 0.332 0.450
FM (17) 0.681 0.738 0.710 0.578 0.532 0.582 0.302 0.580 0.487 0.305 0.432
OV (6) 0.719 0.692 0.692 0.643 0.587 0.514 0.371 0.484 0.485 0.339 0.470
TABLE IV: Precision scores of TRE with 11 attributes. The number after each attribute name is the number of sequences. The red, blue and green fonts indicate the best, second and third performance.
Attribute BMR HCF [23] MEEM [5] KCF [15] DSST [36] TGPR [37] SCM [20] Struck [14] TLD [38] ASLA [19] DLT [35]
LR (4) 0.352 0.488 0.374 0.289 0.326 0.332 0.254 0.360 0.305 0.213 0.243
IPR (31) 0.487 0.537 0.494 0.450 0.460 0.438 0.399 0.410 0.380 0.405 0.357
OPR (39) 0.510 0.536 0.514 0.445 0.439 0.455 0.396 0.409 0.387 0.404 0.368
SV (28) 0.524 0.492 0.463 0.401 0.413 0.396 0.438 0.395 0.384 0.440 0.402
OCC (29) 0.524 0.543 0.510 0.445 0.434 0.449 0.398 0.405 0.384 0.381 0.354
DEF (19) 0.492 0.566 0.516 0.469 0.434 0.504 0.358 0.398 0.357 0.386 0.322
BC (21) 0.500 0.569 0.517 0.483 0.451 0.483 0.387 0.408 0.334 0.410 0.303
IV (25) 0.486 0.516 0.490 0.442 0.446 0.438 0.389 0.396 0.350 0.405 0.347
MB (12) 0.503 0.565 0.513 0.425 0.389 0.420 0.266 0.451 0.385 0.256 0.312
FM (17) 0.504 0.534 0.518 0.415 0.384 0.412 0.269 0.464 0.392 0.285 0.350
OV (6) 0.578 0.526 0.575 0.455 0.426 0.391 0.335 0.421 0.407 0.316 0.314
TABLE V: Success scores of SRE with 11 attributes. The number after each attribute name is the number of sequences. The red, blue and green fonts indicate the best, second and third performance.
Attribute BMR HCF [23] MEEM [5] KCF [15] DSST [36] TGPR [37] SCM [20] Struck [14] TLD [38] ASLA [19] DLT [35]
LR (4) 0.476 0.818 0.511 0.377 0.543 0.501 0.305 0.504 0.363 0.263 0.299
IPR (31) 0.704 0.839 0.752 0.667 0.704 0.648 0.546 0.592 0.554 0.556 0.503
OPR (39) 0.732 0.828 0.774 0.666 0.680 0.669 0.547 0.595 0.560 0.560 0.525
SV (28) 0.752 0.832 0.732 0.632 0.696 0.599 0.598 0.607 0.558 0.601 0.562
OCC (29) 0.735 0.815 0.730 0.662 0.671 0.649 0.540 0.568 0.516 0.514 0.483
DEF (19) 0.684 0.835 0.757 0.677 0.630 0.715 0.475 0.547 0.505 0.516 0.467
BC (21) 0.702 0.851 0.734 0.693 0.655 0.698 0.521 0.555 0.451 0.555 0.439
IV (25) 0.677 0.809 0.707 0.652 0.681 0.630 0.509 0.556 0.480 0.544 0.472
MB (12) 0.686 0.807 0.691 0.567 0.532 0.561 0.309 0.587 0.521 0.310 0.388
FM (17) 0.685 0.748 0.694 0.545 0.505 0.544 0.308 0.577 0.496 0.291 0.397
OV (6) 0.719 0.644 0.690 0.533 0.504 0.451 0.386 0.455 0.463 0.355 0.360
TABLE VI: Precision scores of SRE with 11 attributes. The number after each attribute name is the number of sequences. The red, blue and green fonts indicate the best, second and third performance.

2) Attribute-based Performance: To demonstrate the strength and weakness of BMR, we further evaluate the 35 trackers on videos with 11 attributes categorized by [17].

Fig. 4: Screenshots of sampled results from six long sequences: sylvester, mhyang, dog1, lemming, liquor, and doll.

Tables I and II summarize the success and precision scores of OPE with different attributes. The BMR-based method ranks within the top three for most attributes. Specifically, in terms of the success rate of OPE, the BMR-based method ranks first on 4 of the 11 attributes and second on 6. In the sequences with the BC attribute, the BMR-based method ranks third, and its score is close to that of the MEEM scheme, which ranks second (0.555 vs. 0.569). For the precision scores of OPE, the BMR-based method ranks second on 4 of the 11 attributes and third on 3. In the sequences with the OV attribute, the BMR-based tracker ranks first, and for the videos with the IPR and BC attributes, the proposed tracking algorithm ranks fourth with performance comparable to the third-ranked DSST and KCF methods.

Tables III and IV show the results of TRE with different attributes. The BMR-based method ranks within the top three for most attributes. In terms of success rates, the BMR-based method ranks first on 2 attributes, second on 3 attributes, and third on 6 attributes. In terms of precision rates, the BMR-based tracker ranks third on 7 attributes, and first and second on the OV and OCC attributes, respectively. Furthermore, for other attributes such as LR and BC, the BMR-based tracking algorithm ranks fourth, but its scores are close to those of MEEM and KCF, which rank third (0.581 vs. 0.598, and 0.772 vs. 0.776).

Tables V and VI show the results of SRE with different attributes. In terms of success rates, the rankings of the BMR-based method are similar to those based on TRE except for the IPR and OPR attributes, on which the BMR-based tracker ranks third based on SRE but second based on TRE. Furthermore, although the MEEM method ranks higher than the BMR-based tracker on most attributes, the differences in scores are small. In terms of precision rates, the BMR-based algorithm ranks within the top three for most attributes except for LR, DEF, and IV.

The AUC score of the success rate measures the overall performance of each tracking method [17]. Figure 3 shows that the BMR-based method achieves better results in terms of success rates than precision rates across all metrics (OPE, SRE, TRE) and attributes. The tracking performance can be attributed to two factors. First, the proposed method exploits a logistic regression classifier with explicit feature maps, which efficiently determines the nonlinear decision boundary through online training. Second, the online classifier parameter update scheme in (10) facilitates recovering from tracking drift.

Fig. 5: Overlap score plots of six long sequences shown in Figure 4.

Figure 4 shows sampled tracking results from six long sequences (each with more than 1,000 frames). These sequences account for a large portion of the total number of frames in the benchmark, and hence the performance on them plays an important role in the evaluation. For clear presentation, only the results of the top-performing BMR, HCF, and MEEM methods are shown. In all sequences, the BMR-based tracker is able to track the targets stably over almost all frames. However, the HCF scheme drifts away from the target objects after a few frames in the sylvester and lemming sequences, and the MEEM method drifts to the background when severe occlusions occur in the liquor sequence. To further demonstrate the results over all frames clearly, Figure 5 plots the overlap score of each frame. Overall, the BMR-based tracker performs well against the HCF and MEEM methods in most frames of these sequences.

Fig. 6: Success and precision plots of OPE, TRE, and SRE for BMR, BMR only with LAB+HOG representations, and KCF (KCF is used as a baseline for comparisons).

III-D Analysis of BMR

To demonstrate the effectiveness of BMRs, we remove the Boolean map component from the proposed tracking algorithm and only use the LAB+HOG representations for visual tracking. In addition, we use KCF as a baseline because it adopts HOG representations similar to the proposed tracking method. Figure 6 shows quantitative comparisons on the benchmark dataset. Without the proposed Boolean maps, the AUC scores of the success rates of OPE, TRE, and SRE all decrease noticeably. It is worth noting that the proposed method, even without the Boolean maps, still outperforms KCF in terms of success rates on all metrics, which shows the effectiveness of the LAB color features in BMR. These experimental results show that the BMRs play a key role in robust visual tracking.

Fig. 7: Failure cases of the BMR-based tracker in the singer2 and motorRolling sequences. The results of HCF and MEEM are also illustrated.

III-E Failure Cases

Figure 7 shows failure cases of the proposed BMR-based method in the singer2 and motorRolling sequences. In the singer2 sequence, the foreground object and background scene are similar due to the dim stage lighting at the beginning, and the HCF, MEEM, and proposed methods all drift to the background. Furthermore, as the target in the motorRolling sequence undergoes 360-degree in-plane rotation in the early frames, the MEEM and proposed methods do not adapt well to the drastic appearance variations due to limited training samples. In contrast, only the HCF tracker performs well in this sequence because it leverages dense sampling and high-dimensional convolutional features.

IV Conclusions

In this paper, we propose a Boolean map based representation that exploits connectivity cues for visual tracking. In the BMR scheme, the HOG and raw color feature maps are decomposed into a set of Boolean maps by uniformly thresholding the respective channels. These Boolean maps are concatenated and normalized to form a robust representation, which approximates an explicit feature map of the intersection kernel. A logistic regression classifier with this explicit feature map is trained online to determine the nonlinear decision boundary for visual tracking. Extensive evaluations on a large tracking benchmark dataset demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art algorithms in terms of accuracy and robustness.

References

  • [1] X. Li, W. Hu, C. Shen, Z. Zhang, A. Dick, and A. V. D. Hengel, “A survey of appearance models in visual object tracking,” ACM Transactions on Intelligent Systems and Technology, vol. 4, no. 4, p. 58, 2013.
  • [2] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in International Joint Conference on Artificial Intelligence, vol. 81, pp. 674–679, 1981.
  • [3] I. Matthews, T. Ishikawa, and S. Baker, “The template update problem,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 810–815, 2004.
  • [4] J. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Proceedings of European Conference on Computer Vision, pp. 702–715, 2012.
  • [5] J. Zhang, S. Ma, and S. Sclaroff, “Meem: Robust tracking via multiple experts using entropy minimization,” in Proceedings of European Conference on Computer Vision, pp. 188–203, 2014.
  • [6] M. J. Black and A. D. Jepson, “Eigentracking: Robust matching and tracking of articulated objects using a view-based representation,” International Journal of Computer Vision, vol. 26, no. 1, pp. 63–84, 1998.
  • [7] D. Ross, J. Lim, R. Lin, and M.-H. Yang, “Incremental learning for robust visual tracking,” International Journal of Computer Vision, vol. 77, no. 1, pp. 125–141, 2008.
  • [8] X. Mei and H. Ling, “Robust visual tracking and vehicle classification via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2259–2272, 2011.
  • [9] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2042–2049, 2012.
  • [10] D. Wang, H. Lu, and M.-H. Yang, “Online object tracking with sparse prototypes,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 314–325, 2013.
  • [11] A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 798–805, 2006.
  • [12] S. He, Q. Yang, R. Lau, J. Wang, and M.-H. Yang, “Visual tracking via locality sensitive histograms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2427–2434, 2013.
  • [13] B. Babenko, M.-H. Yang, and S. Belongie, “Robust object tracking with online multiple instance learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1619–1632, 2011.
  • [14] S. Hare, A. Saffari, and P. H. Torr, “Struck: Structured output tracking with kernels,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 263–270, 2011.
  • [15] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2015.
  • [16] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886–893, 2005.
  • [17] Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2411–2418, 2013.
  • [18] J. Kwon and K. M. Lee, “Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive basin hopping monte carlo sampling,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1208–1215, 2009.
  • [19] X. Jia, H. Lu, and M.-H. Yang, “Visual tracking via adaptive structural local sparse appearance model,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1822–1829, 2012.
  • [20] W. Zhong, H. Lu, and M.-H. Yang, “Robust object tracking via sparsity-based collaborative model,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1838–1845, 2012.
  • [21] Y. Li and J. Zhu, “A scale adaptive kernel correlation filter tracker with feature integration,” in European Conference on Computer Vision-Workshops, pp. 254–265, 2014.
  • [22] N. Wang, J. Shi, D.-Y. Yeung, and J. Jia, “Understanding and diagnosing visual tracking systems,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3101–3109, 2015.
  • [23] C. Ma, J.-B. Huang, X. Yang, and M.-H. Yang, “Hierarchical convolutional features for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3074–3082, 2015.
  • [24] R. Allen, P. Mcgeorge, D. Pearson, and A. B. Milne, “Attention and expertise in multiple target tracking,” Applied Cognitive Psychology, vol. 18, no. 3, pp. 337–347, 2004.
  • [25] P. Cavanagh and G. A. Alvarez, “Tracking multiple targets with multifocal attention,” Trends in Cognitive Sciences, vol. 9, no. 7, pp. 349–354, 2005.
  • [26] L. Chen, "Topological structure in visual perception," Science, vol. 218, p. 699, 1982.
  • [27] S. E. Palmer, Vision Science: Photons to Phenomenology, vol. 1. MIT Press, 1999.
  • [28] J. Zhang and S. Sclaroff, “Saliency detection: A boolean map approach,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 153–160, 2013.
  • [29] B. Alexe, T. Deselaers, and V. Ferrari, “Measuring the objectness of image windows,” IEEE Transactions Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189–2202, 2012.
  • [30] M.-M. Cheng, Z. Zhang, W.-Y. Lin, and P. Torr, "Bing: Binarized normed gradients for objectness estimation at 300fps," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3286–3293, 2014.
  • [31] L. Huang and H. Pashler, "A Boolean map theory of visual attention," Psychological Review, vol. 114, no. 3, p. 599, 2007.
  • [32] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 11, pp. 1254–1259, 1998.
  • [33] M. S. Livingstone and D. H. Hubel, “Anatomy and physiology of a color system in the primate visual cortex,” The Journal of Neuroscience, vol. 4, no. 1, pp. 309–356, 1984.
  • [34] K. Grauman and T. Darrell, "The pyramid match kernel: Efficient learning with sets of features," The Journal of Machine Learning Research, vol. 8, pp. 725–760, 2007.
  • [35] N. Wang and D.-Y. Yeung, “Learning a deep compact image representation for visual tracking,” in Advances in Neural Information Processing Systems, pp. 809–817, 2013.
  • [36] M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of British Machine Vision Conference, 2014.
  • [37] J. Gao, H. Ling, W. Hu, and J. Xing, "Transfer learning based visual tracking with Gaussian processes regression," in Proceedings of European Conference on Computer Vision, pp. 188–203, 2014.
  • [38] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N learning: Bootstrapping binary classifiers by structural constraints," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 49–56, 2010.