
Sparse Coding and Counting for Robust Visual Tracking

In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the ℓ0 and ℓ1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion and image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. Besides, we provide a closed-form solution of the combined ℓ0 and ℓ1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed.





1 Introduction

Visual tracking plays an important role in computer vision and has many applications, such as video surveillance, robotics, motion analysis, and human-computer interaction. Even though various algorithms have been proposed, it is still a challenging problem due to complex object motion, heavy occlusion, illumination change, and background clutter.

Visual tracking algorithms can be roughly divided into two major categories: discriminative methods and generative methods. Discriminative methods (e.g., Liu et al. (2009); Babenko et al. (2009); Hare et al. (2011)) view object tracking as a binary classification problem in which the goal is to separate the target object from the background. Generative methods (e.g., Jepson et al. (2003); Ross et al. (2008); Liu et al. (2014b); Zhang et al. (2014); Liu et al. (2014a)) employ a generative appearance model to represent the target's appearance.

We focus on the generative category and briefly review the relevant work below. Recently, sparse representation has been successfully applied to visual tracking (e.g., Mei and Ling (2009); Liu et al. (2010); Zhang et al. (2013); Jin et al. (2014)). Trackers based on sparse representation assume that the appearance of a tracked object can be sparsely represented by an over-complete dictionary, which can be dynamically updated to maintain holistic appearance information. Traditionally, the over-complete dictionary is a series of redundant object templates; however, a set of basis vectors from the target subspace is also used as the dictionary, because an orthogonal dictionary performs as efficiently as a redundant one. In visual tracking, we will call the ℓ1 regularized object representation "sparse coding" (e.g., Mei and Ling (2009)), and the ℓ0 regularized object representation "sparse counting" (e.g., Pan et al. (2013)). The method of Mei and Ling (2009) has been shown to be robust against partial occlusions, which improves the tracking performance. However, because of the redundant dictionary, the heavy computational overhead of the ℓ1 minimization hampers the tracking speed. Very recent efforts have been made to improve this method in terms of both speed and accuracy by using the accelerated proximal gradient (APG) algorithm Bao et al. (2012) or by modeling the similarity between different candidates Zhang et al. (2013). Different from Mei and Ling (2009), IVT Ross et al. (2008) incrementally learns a low-dimensional PCA subspace representation, which adapts online to appearance changes of the target. To get rid of image noise, Lu et al. Wang et al. (2013b) introduce noise regularization into the PCA reconstruction, which is able to handle partial occlusion and other challenging factors. Pan et al. Pan et al. (2013) employ the ℓ0 norm to regularize the linear coefficients of an incrementally updated linear basis (sparse counting) to remove the redundant features of the basis vectors. However, sparse counting can cause unstable solutions because of its nonconvexity and discontinuity. Although sparse coding has good performance, it may cause biased estimation since it penalizes true large coefficients more, producing over-penalization. Consequently, it is necessary to find a way to overcome the disadvantages of sparse coding and sparse counting.

From the viewpoint of statistics, sparse representation is similar to variable selection when the dictionary is fixed. Moreover, the Bayesian framework has been successfully applied to variable selection by enforcing appropriate priors. Laplace priors have been used to avoid overfitting and enforce sparsity in sparse linear models, which yields the sparse coding problem. To further enforce sparsity and reduce the over-penalization of sparse coding, each coefficient is assigned a Bernoulli variable. Therefore, a novel model interpreted from a Bayesian perspective by carrying out maximum a posteriori (MAP) estimation is proposed, which turns out to be a combination of the sparse coding and counting models. In Lu et al. (2013), Lu et al. also consider the ℓ0 and ℓ1 norms under a Bayesian perspective. However, considering that there will be occlusion, illumination change, and background clutter in tracking, we constrain the noise with the ℓ1 norm. Besides, we use an orthogonal dictionary to replace the redundant object templates, since similar atoms among redundant templates may cause mistakes in the coefficients and huge computational complexity. Lastly, we propose a closed-form solution of the regularization combining the ℓ0 norm and the ℓ1 norm, whereas Lu et al. obtain only an approximate solution by using greedy coordinate descent.

Figure 1: The comparison of coefficients, optimal candidates, and reconstructions. The top row shows the coefficients of our method versus unconstrained, sparse coding, and sparse counting regularization, respectively. The bottom row shows the optimal candidates and reconstruction results using unconstrained regularization, sparse coding, sparse counting, and our method under the same dictionary, respectively.

Tracking results using unconstrained regularization, sparse counting, sparse coding, and our model under the same dictionary are shown in Fig. 1, respectively. As shown in Fig. 1, the coefficients of unconstrained regularization and sparse coding are actually not sparse, and the target object is not tracked well. Similarly, sparse counting with sparse coefficients sometimes cannot obtain an appropriate linear combination of the orthogonal basis vectors, which interferes with the tracking accuracy. In contrast, our method is able to reconstruct the object well and find a good candidate, thus facilitating the tracking results. We also compare our model with unconstrained regularization, sparse counting, and sparse coding over all 50 sequences in the benchmark; the precision and success plots are shown in Fig. 2. The parameter settings are given in the section Experimental Results.


The contributions of this work are threefold.

(1) We propose a sparse coding and counting model from a novel Bayesian perspective for visual tracking. Compared to the state-of-the-art algorithms, the proposed method achieves more reliable tracking results.

(2) We propose a closed-form solution of the regularization combining the ℓ0 norm and the ℓ1 norm in a unified formulation.

(3) Although the sparse coding and counting related minimization is an NP-hard problem, we show that the proposed model can be efficiently estimated by the proposed APG method. This makes our tracking method computationally attractive in general and comparable in speed with the SP method Wang et al. (2013b) and the accelerated tracker Bao et al. (2012).

Figure 2: Precision and success plots of overall performance comparison among unconstrained regularization, sparse counting, sparse coding and ours for the 50 videos in the benchmark. The mean precision scores are reported in the legends.

Visual Tracking based on the Particle Filter

In this paper, we employ a particle filter to track the target object. The particle filter provides an estimate of the posterior distribution of random variables related to a Markov chain. Given a set of observed image vectors $Y_t = \{y_1, \ldots, y_t\}$ up to the $t$-th frame and the target state variable $x_t$ that describes the six affine motion parameters, the posterior distribution based on the Bayesian theorem is estimated by

$$p(x_t \mid Y_t) \propto p(y_t \mid x_t) \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid Y_{t-1})\, dx_{t-1}, \qquad (1)$$

where $p(y_t \mid x_t)$ is the observation model that estimates the likelihood of an observed image patch belonging to the object class, and $p(x_t \mid x_{t-1})$ is the motion model that describes the state transition between consecutive frames.

The Motion Model: The motion model $p(x_t \mid x_{t-1}) = \mathcal{N}(x_t; x_{t-1}, \Sigma)$ models the parameters by independent Gaussian distributions around their counterparts in $x_{t-1}$, where $\Sigma$ is a diagonal covariance matrix whose elements are the variances of the affine parameters. In the tracking framework, the optimal target state is obtained by the maximum a posteriori (MAP) probability: $\hat{x}_t = \arg\max_{x_t^i} p(x_t^i \mid Y_t)$, where $x_t^i$ indicates the $i$-th sample of the state $x_t$.
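The sampling-propagation-reweighting loop above can be sketched as follows (a minimal illustration, not the paper's MATLAB implementation; the `likelihood` callback stands in for the observation model, and all names are ours):

```python
import numpy as np

def particle_filter_step(particles, weights, sigma, likelihood, rng):
    """One step of the particle filter: resample, propagate each affine-state
    sample with an independent Gaussian around its previous value, reweight
    by the observation likelihood, and return the MAP sample."""
    n = len(particles)
    # Resample according to the current weights
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Motion model: independent Gaussian around the previous state
    particles = particles + rng.normal(0.0, sigma, size=particles.shape)
    # Observation model: reweight each sample and normalize
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # MAP estimate: the sample with the maximal posterior weight
    return particles, weights, particles[np.argmax(weights)]
```

In practice the six affine parameters form each particle, and the likelihood would be the least-soft-threshold-squares observation model described below.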

The observation model: In this paper, we assume that the tracked target object is generated by a subspace (spanned by $D$ and centered at $\mu$) with corruption (i.i.d. Gaussian-Laplacian noise),

$$y = Dz + s + \varepsilon,$$

where $y$ denotes an observation vector centered at $\mu$, the columns of $D$ are orthogonal basis vectors of the subspace, $z$ indicates the coefficients of the basis vectors, and $\varepsilon$ and $s$ stand for the Gaussian noise vector and the Laplacian noise vector, respectively. The Gaussian component models small dense noise, and the Laplacian one aims to handle outliers. As proposed by Wang et al. (2013a), under the i.i.d. Gaussian-Laplacian noise assumption, the distance between the vector $y$ and the subspace is the least soft threshold squares distance

$$d(y; D) = \frac{1}{2}\|y - Dz^* - s^*\|_2^2 + \lambda\|s^*\|_1.$$

Thus, for each observation $y_t^i$ corresponding to a predicted state $x_t^i$, the observation model is set to be

$$p(y_t^i \mid x_t^i) = \exp\bigl(-\gamma\, d(y_t^i; D)\bigr), \qquad (2)$$

where $z^*$ and $s^*$ are the optimal solutions of Eq. (5), which will be introduced in detail in the next section, and $\gamma$ is a constant controlling the shape of the Gaussian kernel.
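Given optimal coefficients and noise, the likelihood above can be sketched as follows (the λ and γ values are illustrative defaults; function and variable names are ours):

```python
import numpy as np

def lsst_likelihood(y, D, z, s, lam=0.1, gamma=5.0):
    """Observation likelihood under the Gaussian-Laplacian noise model:
    exp(-gamma * d), where d is the least soft threshold squares distance
    evaluated at coefficients z and Laplacian noise s."""
    residual = y - D @ z - s
    d = 0.5 * np.dot(residual, residual) + lam * np.sum(np.abs(s))
    return np.exp(-gamma * d)
```

A perfect reconstruction with zero noise gives likelihood 1; any residual or detected outlier lowers it.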

Model Update: It is essential to update the observation model to handle appearance changes of the target in visual tracking. Since the error term $s$ can be used to identify outliers (e.g., Laplacian noise, illumination), we adopt the strategy proposed by Wang et al. (2013a) to update the appearance model using the incremental PCA with mean update Ross et al. (2008) as follows,

$$y_i' = \begin{cases} y_i, & s_i = 0,\\ \mu_i, & s_i \neq 0, \end{cases}$$

where $y_i$, $y_i'$, $s_i$, and $\mu_i$ are the $i$-th elements of $y$, the updated observation $y'$, $s$, and $\mu$, respectively, and $\mu$ is the mean vector computed in the same way as in Ross et al. (2008).
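The update rule above can be sketched as follows (assuming, as in the strategy of Wang et al. (2013a), that pixels flagged by a nonzero Laplacian error are replaced by the mean image before the incremental PCA update; names are ours):

```python
import numpy as np

def clean_observation(y, s, mu):
    """Replace outlier pixels (nonzero Laplacian error s_i) by the mean-image
    pixel mu_i; keep clean pixels y_i unchanged."""
    y = np.asarray(y, dtype=float)
    s = np.asarray(s, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return np.where(s == 0.0, y, mu)
```

The cleaned vector, rather than the raw observation, is then fed to the incremental PCA so that occluded pixels do not corrupt the subspace.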

Object Representation under Bayesian Framework

Based on the discussion in the aforementioned section, if $y$ is viewed as the vectorized target region, it can be represented by an image subspace with corruption,

$$y = Dz + s + \varepsilon. \qquad (4)$$
Pan et al. (2013) show that sparse counting can remove redundant features (e.g., background portions) while selecting useful parts in the subspace. However, sparse counting can cause unstable solutions because of its nonconvexity and discontinuity. Sparse coding may produce over-penalization, despite its good stability. Considering that the Bayesian framework has the capacity to encode prior knowledge and to make valid estimates of uncertainty, a novel model combining sparse coding and sparse counting is proposed for visual tracking. The model is

$$\min_{z,s}\ \frac{1}{2}\|y - Dz - s\|_2^2 + \alpha\|z\|_0 + \beta\|z\|_1 + \lambda\|s\|_1, \quad \text{s.t. } D^\top D = I, \qquad (5)$$

where $\|z\|_0$ denotes the ℓ0 norm, which counts the number of non-zero elements, $\|\cdot\|_1$ and $\|\cdot\|_2$ denote the ℓ1 and ℓ2 norms, respectively, $\alpha$, $\beta$, and $\lambda$ are regularization parameters, and $I$ is an identity matrix. The term $\lambda\|s\|_1$ is used to reject outliers (e.g., occlusions), while $\alpha\|z\|_0$ and $\beta\|z\|_1$ are used to select the useful subspace features.

Next we will introduce the aforementioned model under the Bayesian framework in detail. The joint posterior distribution of $z$, $r$, $s$, and $\sigma^2$ based on the Bayesian theorem can be written as

$$p(z, r, s, \sigma^2 \mid y) \propto p(y \mid z, r, s, \sigma^2)\, p(z)\, p(r)\, p(s)\, p(\sigma^2), \qquad (6)$$

where $p(y \mid z, r, s, \sigma^2)$, $p(z)$, $p(r)$, $p(s)$, and $p(\sigma^2)$ denote the likelihood of the noisy vectorized target region and the priors on the coefficient vector $z$, the index vector $r$ ($r_i \in \{0, 1\}$), the Laplacian noise $s$, and the noise level $\sigma^2$, respectively. In Eq. (6), the hyper-parameters of these priors are treated as constants.

With the definition of the index variable $r$, Eq. (4) can be rewritten as

$$y = D(r \odot z) + s + \varepsilon, \qquad (7)$$

where $\odot$ denotes the element-wise product. We generally assume that the noise follows the Gaussian distribution, $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$. We treat the Laplacian noise term as missing values with the same Laplacian prior. Therefore, the likelihood has the following distribution:

$$p(y \mid z, r, s, \sigma^2) \propto (\sigma^2)^{-d/2} \exp\Bigl(-\frac{\|y - D(r \odot z) - s\|_2^2}{2\sigma^2}\Bigr), \qquad (8)$$

where $d$ is the dimension of $y$.
To enforce sparsity, the coefficients are assumed to follow a Laplace distribution:

$$p(z) \propto \prod_i \exp\bigl(-|z_i| / \theta_z\bigr) = \exp\bigl(-\|z\|_1 / \theta_z\bigr). \qquad (9)$$
Our goal is to remove redundant features while preserving the useful parts of the dictionary. As Laplace priors, which yield sparse coding, may lead to over-penalization of the large coefficients, we assume the index variable of each coefficient to be a Bernoulli variable to enforce sparsity and reduce over-penalization:

$$p(r) = \prod_i \rho^{r_i} (1 - \rho)^{1 - r_i}, \qquad (10)$$

where $r_i \in \{0, 1\}$. Here, the Bernoulli prior on $r_i$ means that $r_i$ will be 1 with probability $\rho$ and 0 with probability $1 - \rho$, if the prior information $\rho$ is known.

The noise $s$ aims at handling outliers, so it follows a Laplace distribution:

$$p(s) \propto \exp\bigl(-\|s\|_1 / \theta_s\bigr). \qquad (11)$$
The variance of the noise is assigned an Inverse-Gamma prior as follows:

$$p(\sigma^2) = \frac{b_0^{a_0}}{\Gamma(a_0)} (\sigma^2)^{-a_0 - 1} \exp\bigl(-b_0 / \sigma^2\bigr), \qquad (12)$$

where $\Gamma(\cdot)$ denotes the gamma function and $a_0$, $b_0$ are constants.

Then, the optimal $(z, r, s, \sigma^2)$ are obtained by the MAP probability. After taking the negative logarithm, the formula is

$$\min_{z, r, s, \sigma^2}\ -\log p(y \mid z, r, s, \sigma^2) - \log p(z) - \log p(r) - \log p(s) - \log p(\sigma^2). \qquad (13)$$
Combining the aforementioned Eq. (6), Eq. (8), Eq. (9), Eq. (10), Eq. (11), and Eq. (12), we have

$$\min_{z, r, s, \sigma^2}\ \frac{1}{2\sigma^2}\|y - D(r \odot z) - s\|_2^2 + \frac{\|z\|_1}{\theta_z} + \frac{\|s\|_1}{\theta_s} + \log\frac{1-\rho}{\rho}\,\|r\|_0 + \Bigl(\frac{d}{2} + a_0 + 1\Bigr)\log\sigma^2 + \frac{b_0}{\sigma^2} + \mathrm{const}. \qquad (14)$$
Fixing $\sigma^2$, Eq. (14) can be rewritten as

$$\min_{z, r, s}\ \frac{1}{2}\|y - D(r \odot z) - s\|_2^2 + \alpha\|r\|_0 + \beta\|z\|_1 + \lambda\|s\|_1, \qquad (15)$$

where $\alpha = \sigma^2 \log\frac{1-\rho}{\rho}$, $\beta = \sigma^2/\theta_z$, and $\lambda = \sigma^2/\theta_s$. Substituting $z \leftarrow r \odot z$ and noting that $\|r\|_0 = \|z\|_0$ at the optimum, Eq. (15) can be rewritten as

$$\min_{z, s}\ \frac{1}{2}\|y - Dz - s\|_2^2 + \alpha\|z\|_0 + \beta\|z\|_1 + \lambda\|s\|_1. \qquad (16)$$
By observing the objective function in Eq. (16), it can be found that the essential regularization in Eq. (16) is a combination of sparse coding and sparse counting. With a fixed appropriate orthogonal dictionary $D$, Eq. (16) can be written as the optimization problem in Eq. (5).

Theory of Fast Numerical Algorithm

As is well known, APG is an excellent algorithm for convex programming Lin et al. (2009); Tseng (2008) and has been used in visual tracking. In this section, we propose a fast numerical algorithm for solving the proposed nonconvex and nonsmooth model using the APG approach. The experimental results show that it converges to a solution quickly and achieves attractive performance. Besides, the closed-form solution of the combined ℓ0 and ℓ1 regularization is provided.

APG Algorithm for Solving Eq. (17)

Eq. (5) contains two subproblems: one is solving for $z$ with $s$ fixed, the other is solving for $s$ with $z$ fixed, as follows:

$$z^{k+1} = \arg\min_z\ \frac{1}{2}\|y - Dz - s^k\|_2^2 + \alpha\|z\|_0 + \beta\|z\|_1,$$
$$s^{k+1} = \arg\min_s\ \frac{1}{2}\|y - Dz^{k+1} - s\|_2^2 + \lambda\|s\|_1. \qquad (17)$$
Solving Eq. (17) is an NP-hard problem because it involves a discrete counting metric. We adopt a special optimization strategy based on the APG approach Lin et al. (2009), which ensures that each step can be solved easily. In the APG algorithm, we need to solve

$$\min_z\ \frac{L}{2}\|z - v^k\|_2^2 + \alpha\|z\|_0 + \beta\|z\|_1, \qquad (18)$$

where $v^k = u^k - \frac{1}{L}\nabla f(u^k)$, $f(z) = \frac{1}{2}\|y - Dz - s^k\|_2^2$, $\nabla f(u^k) = D^\top(Du^k + s^k - y)$, $u^k$ is the extrapolation point, and $L$ is a Lipschitz constant of $\nabla f$.

The solution of Eq. (18) can be obtained element-wise by

$$z_i^{k+1} = \mathcal{T}_{\alpha/L,\,\beta/L}(v_i^k), \qquad (19)$$

where $v_i^k$ is the $i$-th element of $v^k$, and $\mathcal{T}_{\alpha,\beta}$ is defined as

$$\mathcal{T}_{\alpha,\beta}(v) = \begin{cases} \operatorname{sign}(v)\,(|v| - \beta), & |v| > \beta + \sqrt{2\alpha},\\ 0, & \text{otherwise.} \end{cases} \qquad (20)$$
The numerical algorithm for solving Eq. (17) is summarized in Algorithm 1. Due to the orthogonality of $D$, Algorithm 1 converges fast, and its computational cost does not increase compared to the solver of the ℓ1 regularized model.

  Initialize: Set initial guesses $z^0 = u^1 = 0$, $s^0 = 0$, $t_1 = 1$, and $k = 1$.
  while not convergence or termination do
  Step 1: $v^k = u^k - \frac{1}{L} D^\top (D u^k + s^{k-1} - y)$;
  Step 2: $z^k_i = \mathcal{T}_{\alpha/L,\,\beta/L}(v^k_i)$ for each element $i$;
  Step 3: $s^k = \operatorname{sign}(y - D z^k) \odot \max(|y - D z^k| - \lambda, 0)$;
  Step 4: $t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2}$;
  Step 5: $u^{k+1} = z^k + \frac{t_k - 1}{t_{k+1}} (z^k - z^{k-1})$, $k \leftarrow k + 1$.
  end while
Algorithm 1 Fast numerical algorithm for solving Eq. (17)
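Algorithm 1 might be sketched as below, assuming the objective of Eq. (17): a Nesterov-accelerated proximal step on the coefficients using the closed-form ℓ0+ℓ1 thresholding rule from the lemma, alternated with a closed-form soft-thresholding update of the Laplacian noise. Parameter values and names are illustrative, not the paper's:

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l0_l1(v, alpha, beta):
    # Closed-form prox of alpha*||.||_0 + beta*||.||_1 (see the lemma):
    # soft-shrink by beta, then zero entries below beta + sqrt(2*alpha)
    shrunk = soft(v, beta)
    return np.where(np.abs(v) > beta + np.sqrt(2.0 * alpha), shrunk, 0.0)

def apg_track(y, D, alpha=0.01, beta=0.01, lam=0.1, L=2.0, iters=100):
    """Sketch of Algorithm 1: approximately solve
    min_{z,s} 0.5*||y - Dz - s||^2 + alpha*||z||_0 + beta*||z||_1 + lam*||s||_1
    where D has orthonormal columns (D^T D = I)."""
    z = np.zeros(D.shape[1]); u = z.copy(); s = np.zeros_like(y); t = 1.0
    for _ in range(iters):
        grad = D.T @ (D @ u + s - y)                  # gradient of the smooth term
        z_new = prox_l0_l1(u - grad / L, alpha / L, beta / L)
        s = soft(y - D @ z_new, lam)                  # closed-form s-update
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        u = z_new + ((t - 1.0) / t_new) * (z_new - z)  # Nesterov extrapolation
        z, t = z_new, t_new
    return z, s
```

On a toy problem where one coordinate of `y` lies outside the column space of `D`, the `s` variable absorbs it as an outlier while `z` recovers the in-subspace coefficient.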

Closed-Form Solution of the Combined ℓ0 and ℓ1 Regularization

This subsection mainly focuses on a sparse combinatory model that combines the ℓ0 and ℓ1 norms together as the regularizer:

$$\min_x\ \frac{1}{2}(x - v)^2 + \alpha\|x\|_0 + \beta|x|, \qquad (21)$$

where $x, v \in \mathbb{R}$, $\alpha, \beta > 0$, and $\|x\|_0$ denotes the ℓ0 norm: $\|x\|_0 = 1$ if $x \neq 0$, and $\|x\|_0 = 0$ otherwise.

Lemma. The optimal solution of Eq. (21) is given by

$$x^* = \begin{cases} \operatorname{sign}(v)\,(|v| - \beta), & |v| > \beta + \sqrt{2\alpha},\\ 0, & \text{otherwise.} \end{cases} \qquad (22)$$
The proof can be found in the Supporting Information. If $x$ is a vector, Eq. (21) changes into

$$\min_x\ \frac{1}{2}\|x - v\|_2^2 + \alpha\|x\|_0 + \beta\|x\|_1, \qquad (23)$$

where $x, v \in \mathbb{R}^n$ and $\|x\|_0$ counts the non-zero elements of $x$. It is obvious that Eq. (23) can be turned into

$$\sum_i \Bigl(\frac{1}{2}(x_i - v_i)^2 + \alpha\|x_i\|_0 + \beta|x_i|\Bigr), \qquad (24)$$
so it can be seen as a sequence of scalar optimizations over the $x_i$, each of which can be solved by the Lemma. More analysis of the combination of ℓ0 and ℓ1 regularization can be found in the Supporting Information.
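The element-wise solution above can be sketched as a proximal operator (a minimal implementation of the thresholding rule stated in the lemma; names are ours):

```python
import numpy as np

def prox_l0_l1(v, alpha, beta):
    """Element-wise closed-form solution of
    min_x 0.5*(x - v)^2 + alpha*||x||_0 + beta*|x|:
    soft-shrink by beta, but zero out any entry whose magnitude
    does not exceed the sparsity threshold beta + sqrt(2*alpha)."""
    v = np.asarray(v, dtype=float)
    shrunk = np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)
    keep = np.abs(v) > beta + np.sqrt(2.0 * alpha)
    return np.where(keep, shrunk, 0.0)
```

For example, with alpha = beta = 0.5 the sparsity threshold is 0.5 + 1.0 = 1.5, so an entry v = 2.0 maps to 1.5 while v = 1.2 maps to 0.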

Analysis of the combinatory model Eq. (23)

Figure 3: Analysis of the combination of ℓ0 and ℓ1 regularization. (a) shows the closed-form solutions of linear regression and of ℓ1, ℓ0, and ℓ0+ℓ1 regularized regression, respectively. (b) shows the changes of the sparsity thresholds of ℓ1, ℓ0, and ℓ0+ℓ1 regularized regression, respectively.

In Eq. (23), if we set $\alpha = 0$ and $\beta = 0$, the model degenerates to linear regression. If we set $\alpha = 0$, Eq. (23) reduces to ℓ1 regularized regression, while it becomes ℓ0 regularized regression when $\beta = 0$. Fig. 3 (a) shows the closed-form solutions of these four cases, with the regularization parameters of the ℓ0+ℓ1, ℓ1, and ℓ0 regularized regressions set to comparable values. We note that the ℓ0+ℓ1 regularized regression has the same sparsity as the ℓ0 regularized regression, while causing less over-penalization than the ℓ1 regularized regression. In Fig. 3 (b), the changes of the sparsity thresholds of the ℓ1, ℓ0, and ℓ0+ℓ1 regularized regressions are shown, respectively: the threshold of ℓ1 is $\beta$, that of ℓ0 is $\sqrt{2\alpha}$, and that of ℓ0+ℓ1 is $\beta + \sqrt{2\alpha}$. As the regularization parameters change from 0 to 1, it is obvious that the threshold of ℓ0+ℓ1 is larger than those of ℓ1 and ℓ0 in the interval $(0, 1)$.

Orthogonal Dictionary learning for Visual Tracking

In this section, we describe dictionary learning in detail in three parts: dictionary initialization, orthogonal dictionary update, and dictionary reinitialization.

Dictionary Initialization: There are two schemes to initialize the orthogonal dictionary: one performs PCA on the set $F$ of initial frames, the other performs RPCA on $F$. When the initial frames do not undergo corruption (e.g., occlusion or illumination change), we apply PCA to $F$ instead of RPCA. The whole PCA process is a skinny SVD of $F$, taking the basis vectors of the column space as the initial dictionary. However, when the initial frames contain large sparse noise, RPCA is selected to get the intrinsic low-rank features $A$, which can be obtained by solving Zhang et al. (2014):

$$\min_{A, E}\ \|A\|_* + \lambda\|E\|_1, \quad \text{s.t. } F = A + E. \qquad (25)$$

When solving Eq. (25), the skinny SVD of $A$ is readily available, $A = U \Sigma V^\top$, and $U$ is taken as the initial orthogonal dictionary. Fig. 4 (a) shows that PCA initialization and RPCA initialization both perform well when the initial frames have little noise. The initial frames are generally clean; therefore, we choose PCA initialization as the default.

Figure 4: Comparison of the PCA process to the RPCA process. The upper portion of the image is the tracking frame. The middle of the image consists of three sub-pictures: the left is the mean image, the middle is the reconstruction result, and the right is the Laplace noise. The bottom of the image shows the top ten basis vectors of the dictionary. (a) shows the tracking results of PCA and RPCA dictionary initialization. The tracking performance with and without RPCA reinitialization is shown in (b).

Orthogonal Dictionary Update: As the appearance of a target may change drastically, it is necessary to update the orthogonal dictionary . Here we adopt an incremental PCA algorithm Levey and Lindenbaum (2000) to update the dictionary.

Dictionary Reinitialization: When the tracker is prone to drift, dynamically reinitializing the dictionary to obtain the intrinsic subspace features is needed. We adopt the strategy proposed by Zhang et al. (2014). The reinitialization is performed at the $t$-th frame if $\|s_t\|_0 / d > \varepsilon_0$, where $s_t$ is the noise term at the $t$-th frame, $d$ is the length of the vector, and $\varepsilon_0$ is a threshold parameter (generally 0.5). If this condition holds, we reinitialize the dictionary in the same way as the dictionary initialization by RPCA, but $F$ in Eq. (25) is different: here, $F$ consists of the optimal candidate observations from the initial frames (generally 10) and the latest frames. Fig. 4 (b) compares the tracking performance with and without RPCA reinitialization when the object undergoes variable illumination. After reinitializing the dictionary, our tracker re-tracks the object, so dictionary reinitialization is effective in improving the reconstruction ability. In Algorithm 2, we summarize the overall tracking process for one frame.
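The reinitialization test can be sketched as checking whether the fraction of nonzero entries of the noise vector exceeds the threshold (0.5 in the text); names are ours:

```python
import numpy as np

def needs_reinit(s, threshold=0.5):
    """True when the l0 'norm' of the Laplacian noise s, normalized by the
    vector length, exceeds the threshold, i.e. too many pixels are outliers."""
    s = np.asarray(s, dtype=float)
    return np.count_nonzero(s) / s.size > threshold
```

When the check fires, the dictionary is rebuilt by RPCA over the initial and latest candidate observations as described above.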

Experimental Results

In this section, we compare the performance of our proposed tracker with several state-of-the-art tracking algorithms, namely TLD Kalal et al. (2012), IVT Ross et al. (2008), ASLA Jia et al. (2012), APG Bao et al. (2012), MTT Zhang et al. (2013), SP Wang et al. (2013b), SPOT Zhang and Maaten (2013), FOT Vojíř and Matas (2014), SST Zhang et al. (2015), SCM Zhong et al. (2012), MIL Babenko et al. (2009), and Struck Hare et al. (2011), on a benchmark Wu et al. (2013) with 50 challenging video sequences. Our tracker is implemented in MATLAB and runs at 4.2 fps on an Intel 2.53 GHz Dual-Core CPU with 8 GB memory, running Windows 7 and MATLAB R2013b. We empirically set the regularization parameters $\alpha$, $\beta$, and $\lambda$, and the Lipschitz constant $L = 2$. Before solving Eq. (5), all the candidates are centralized. For efficiency, the updated orthogonal dictionary takes the columns corresponding to the largest eigenvalues of PCA or RPCA, 600 particles are adopted, and the model is incrementally updated every five frames. In the following, we present both qualitative and quantitative comparisons of the above-mentioned methods.

  Initialization: Initialize the orthogonal dictionary $D$ by performing PCA on $F$.
  Input: State $x_{t-1}$ and orthogonal dictionary $D$.
  Step 1: Draw new samples from $p(x_t \mid x_{t-1})$ and obtain the corresponding candidates.
  Step 2: Obtain $z$ and $s$ for each candidate using (17).
  Step 3: For each candidate, calculate the observation probability using (2).
  Step 4: Find the tracking result patch with the maximal observation likelihood and its corresponding noise $s_t$.
  Step 5: Perform the incremental PCA algorithm to update the orthogonal dictionary every five frames. If $\|s_t\|_0 / d > \varepsilon_0$, reinitialize the dictionary at the $t$-th frame using (25).
  Output: State $x_t$ and the corresponding image patch; orthogonal dictionary $D$.
Algorithm 2 Robust Visual Tracking Using Our Tracker

Qualitative Evaluation

Fig. 5 shows frames taken from the 50 videos to present the qualitative results of our method compared with the top-performing SP and SST. We also choose some examples from the 50 sequences to illustrate the effectiveness of our method; Fig. 6 shows the visualization results.

Figure 5: Qualitative results for our method, compared with SP and SST. Reprinted from Wu et al. (2013) under a CC BY license, with permission from Yi Wu, original copyright 2013.

Heavy Occlusion: Fig. 6 (a) and (b) show four challenging sequences with heavy occlusion. In Faceocc1 and Faceocc2, the targets undergo heavy occlusion and in-plane rotation; it can be seen that our method outperforms the other tracking algorithms. Freeman4 and David3 demonstrate that the proposed method can capture the accurate location of objects in terms of position and scale when the target undergoes severe occlusion (e.g., Freeman4 #0144 and David3 #0085). However, the IVT, APG, MIL, SP, SCM, ASLA, TLD, SPOT, FOT, SST, MTT, and Struck methods drift away from the target object when occlusion occurs. For these four sequences, the IVT method performs poorly since conventional PCA is not robust to occlusions. Although APG and SP utilize sparsity to model outliers, it is observed that their occlusion detection is not stable when a drastic change of appearance happens. In contrast, our method is robust to heavy occlusion. This is because our combined ℓ0 and ℓ1 regularized appearance model can exactly reconstruct the object.

Figure 6: Sampled tracking results of evaluated algorithms on fourteen challenging image sequences. Reprinted from Wu et al. (2013) under a CC BY license, with permission from Yi Wu, original copyright 2013.

Fast Motion: Fig. 6 (c) shows the sequences Boy and Jumping with fast motion. It is difficult to predict the locations of the tracked objects when they undergo abrupt motion. In Boy, the captured images are seriously blurred, but Struck and our method track the target faithfully throughout the sequence. The IVT, MTT, ASLA, SCM, and SST methods drift away severely. We note that most of the other trackers have drift problems due to the abrupt motion in the sequence Jumping. In contrast, SST and our method successfully track the target for the whole video.

Drastic Pose, Scale and Illumination Changes: In Fig. 6 (d) and (e), we test five challenging sequences with drastic pose, scale, and illumination changes. The Fish and Tiger1 clips contain significant illumination variation. We can see that the APG, MTT, and MIL methods are less effective in these cases (e.g., Fish #0305 and Tiger1 #0240). In Singer2 and Jogging-2, the other trackers drift away when the objects undergo variable illumination and pose variation (e.g., Singer2 #0110 and Jogging-2 #0100); however, our method still performs well. Our method also achieves good performance in CarScale with scale variation (e.g., CarScale #0204). Subspace-based approaches may fail to update the appearance model because the calculation of coefficients in their models may include redundant background features. Our method can successfully adapt to drastic changes since the combination of sparse coding and sparse counting is not merely stable but also able to obtain the intrinsic features of the subspace.

Background Clutters: Fig. 6 (f) demonstrates the tracking results in Deer, Basketball, and Football with background clutter. Basketball is a difficult sequence because it contains a cluttered background, illumination change, heavy occlusion, and non-rigid pose variation. Apart from our tracker, none of the compared algorithms works well on it (e.g., Basketball #0486 and #0614). As shown in Deer and Football, our tracker performs relatively well (e.g., Deer #0031 and Football #0304), as it has excluded background clutter in the sparse errors, but TLD, FOT, and MIL fail.

Figure 7: Precision and success plots over all the 50 sequences. The mean precision scores are reported in the legends.

Quantitative Evaluation

Faceocc1 0.58 0.73 0.32 0.76 0.70 0.79 0.74 0.60 0.79 0.79 0.60 0.73 0.80
Faceocc2 0.62 0.73 0.65 0.69 0.75 0.59 0.69 0.64 0.63 0.73 0.67 0.79 0.69
Freeman4 0.22 0.15 0.13 0.34 0.22 0.17 0.01 0.11 0.18 0.26 0.05 0.17 0.41
David3 0.10 0.48 0.43 0.38 0.10 0.46 0.77 0.41 0.30 0.41 0.54 0.29 0.73
Boy 0.66 0.26 0.37 0.73 0.50 0.36 0.57 0.64 0.36 0.38 0.49 0.76 0.81
Jumping 0.66 0.12 0.23 0.15 0.10 0.70 0.01 0.20 0.16 0.62 0.12 0.52 0.71
Fish 0.81 0.77 0.85 0.34 0.16 0.83 0.83 0.78 0.86 0.75 0.45 0.85 0.87
Tiger1 0.38 0.10 0.29 0.31 0.26 0.10 0.70 0.19 0.16 0.16 0.12 0.15 0.61
Singer2 0.22 0.04 0.04 0.04 0.04 0.04 0.75 0.21 0.04 0.17 0.51 0.04 0.62
Jogging-2 0.66 0.14 0.14 0.15 0.13 0.73 0.20 0.12 0.12 0.73 0.14 0.20 0.74
CarScale 0.45 0.63 0.61 0.50 0.49 0.60 0.01 0.35 0.55 0.59 0.41 0.41 0.81
Deer 0.60 0.03 0.03 0.60 0.61 0.72 0.72 0.16 0.62 0.07 0.12 0.74 0.82
Basketball 0.02 0.11 0.39 0.23 0.19 0.23 0.01 0.17 0.20 0.46 0.22 0.20 0.63
Football 0.49 0.56 0.53 0.55 0.58 0.69 0.01 0.55 0.40 0.49 0.59 0.53 0.59
Average 0.46 0.34 0.36 0.41 0.34 0.50 0.43 0.37 0.39 0.44 0.39 0.41 0.70
FPS 21.74 27.83 7.48 2.47 0.99 2.35 376.48 2.12 0.37 28.06 10.01 4.27
Table 1: Average overlap rate and average frames per second (FPS). The best and the second-best results are highlighted in the original tables.

We use two metrics to evaluate the proposed algorithm against other state-of-the-art methods. The first metric is the center location error, measured against manually labeled ground-truth data. The second one is the overlap rate, i.e., $\mathrm{score} = \mathrm{area}(R_T \cap R_G) / \mathrm{area}(R_T \cup R_G)$, where $R_T$ is the tracking bounding box and $R_G$ is the ground-truth bounding box. Larger average scores mean more accurate results.
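For axis-aligned bounding boxes in (x, y, w, h) format, the overlap rate can be computed as below (a standard intersection-over-union sketch; names are ours):

```python
def overlap_rate(box_t, box_g):
    """Overlap rate area(RT ∩ RG) / area(RT ∪ RG) for axis-aligned
    boxes given as (x, y, w, h)."""
    xt, yt, wt, ht = box_t
    xg, yg, wg, hg = box_g
    # Intersection extents along each axis (clamped at zero)
    ix = max(0.0, min(xt + wt, xg + wg) - max(xt, xg))
    iy = max(0.0, min(yt + ht, yg + hg) - max(yt, yg))
    inter = ix * iy
    union = wt * ht + wg * hg - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0; the success plots threshold this score over a range of values.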

Table 1 shows the average overlap rates. Table 2 reports the average center location errors (in pixels), where a smaller average error means a more accurate result. As can be seen from the tables, most sequences processed by our method have lower average errors and higher overlap rates. We provide the precision and success plots in Fig. 7 to evaluate our performance over all the 50 sequences. The evaluation parameters are set to the defaults in Wu et al. (2013). We note that our algorithm performs well for the videos with occlusion, deformation, in-plane rotation, and out-of-plane rotation based on the precision metric and the success rate metric, as shown in Fig. 8 and Fig. 9, respectively. Both the tables and the figures show that our method achieves favorable performance against other state-of-the-art methods.

To further compare the running time of the four subspace-based tracking algorithms (i.e., IVT, APG, SP, and our method), we calculated the average frames per second (FPS) per image patch (see the last row of Table 1). For APG, we report the FPS of its accelerated version. It can be seen that IVT is considerably faster than the other trackers, as its computation only involves matrix-vector multiplication. Both SP and our method are faster than APG. It is also observed that our method is much faster than SP. This is due to the different choices of optimization scheme: SP adopts a naive alternating minimization strategy, whereas our method is efficiently solved by APG.

Faceocc1 27.37 18.42 78.06 17.33 21.00 14.14 17.17 29.00 13.00 13.04 29.86 18.78 12.88
Faceocc2 12.28 7.42 19.35 12.76 9.836 10.43 11.78 11.94 12.82 5.96 9.02 13.60 5.50
Freeman4 39.18 43.04 70.24 22.12 23.55 79.66 108.70 54.66 56.20 56.20 62.07 48.70 10.39
David3 208.00 51.95 87.76 90.00 341.33 8.74 6.27 33.40 104.50 73.09 29.68 106.50 5.79
Boy 4.49 91.25 106.07 7.03 12.77 58.09 8.93 5.79 66.97 51.02 12.83 3.84 2.57
Jumping 5.94 61.56 46.08 83.75 84.57 4.72 120.37 19.83 45.70 6.54 65.89 9.99 4.99
Fish 6.54 5.67 3.85 29.43 45.50 3.99 4.52 6.50 3.14 8.54 24.14 3.40 3.08
Tiger1 49.45 106.61 55.87 58.45 64.39 124.36 15.93 73.49 93.49 93.49 108.93 128.70 18.64
Singer2 58.32 175.46 175.28 180.87 209.69 178.39 13.73 57.62 175.28 113.63 22.53 174.32 14.45
Jogging-2 13.56 138.22 169.87 145.85 157.12 3.61 72.23 169.16 442.77 4.15 132.99 107.687 5.88
CarScale 22.60 11.90 24.64 79.78 87.61 13.36 207.01 106.20 87.05 33.38 33.47 36.43 7.66
Deer 30.93 182.69 160.06 24.19 18.91 6.84 13.95 80.30 13.81 103.54 100.73 5.27 4.59
Basketball 213.86 107.11 82.64 137.53 106.80 39.79 169.86 118.02 105.93 52.90 91.92 118.6 7.92
Football 14.26 14.34 15.00 15.11 13.67 5.22 202.03 13.36 17.21 16.30 12.09 17.31 7.28
Average 50.48 72.54 78.20 64.58 85.48 39.38 69.46 55.66 88.42 48.26 48.92 49.17 7.97
FPS 21.74 27.83 7.48 2.47 0.99 2.35 376.48 2.12 0.37 28.06 10.01 4.27
Table 2: Average center location error (in pixels) and average frames per second (FPS). The best and the second-best results are highlighted in the original tables.
Figure 8: The plots of OPE with attributes based on the precision metric.
Figure 9: The plots of OPE with attributes using the success rate metric.


Conclusion

In this paper, we propose a sparse coding and counting method under a Bayesian framework for robust visual tracking. The proposed method combines ℓ0 regularization and ℓ1 regularized sparse representation in a unified formulation; therefore, it has a better ability to sparsely represent an object, and the reconstruction results are also better. Besides, to solve the proposed model, we develop a fast and efficient APG algorithm. Moreover, the closed-form solution of the combined ℓ0 and ℓ1 norm regularization is provided. Extensive experiments testify to the superiority of our method over state-of-the-art methods, both qualitatively and quantitatively.


This work is partially supported by the National Natural Science Foundation of China (Nos. 61300086, 61432003, 61301270, 61173103, 91230103), the Fundamental Research Funds for the Central Universities (DUT15QY15), the Open Project Program of the State Key Laboratory of CAD&CG, Zhejiang University, Zhejiang, China (No. A1404), and National Science and Technology Major Project (Nos. 2013ZX04005-021, 2014ZX001011).

Appendix: Proof of the Closed-Form Solution of Combining L0 and L1 Regularization

  • First, we denote f(x) = (1/2)(x − y)² + λ1|x| + λ0·1{x ≠ 0}, where 1{·} is the indicator function. It is obvious that if y = 0, then the minimizer is x* = 0. Then we need to discuss the case that y ≠ 0:

    1. if x > 0, then f(x) = (1/2)(x − y)² + λ1x + λ0. Writing its K.K.T. condition, we get x* = y − λ1 (which requires y > λ1), and the objective value is λ1y − λ1²/2 + λ0.

    2. if x < 0, then f(x) = (1/2)(x − y)² − λ1x + λ0. It is easy to get x* = y + λ1 (which requires y < −λ1), and the objective value is −λ1y − λ1²/2 + λ0.

    Then, we need to compare these three cases; the objective value at x = 0 is y²/2. If λ1|y| − λ1²/2 + λ0 < y²/2, i.e. (|y| − λ1)² > 2λ0, we have |y| > λ1 + √(2λ0). Combining this with the sign conditions above, we obtain x* = sign(y)(|y| − λ1) when |y| > λ1 + √(2λ0), and x* = 0 otherwise.
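The case analysis above can be checked numerically. The sketch below (hypothetical names `prox_l0l1`, `y`, `lam1`, `lam0`) implements the resulting soft-then-hard thresholding rule and verifies it against a brute-force grid search over candidate minimizers, under the scalar objective f(x) = (1/2)(x − y)² + λ1|x| + λ0·1{x ≠ 0} used in the proof.

```python
import numpy as np

def prox_l0l1(y, lam1, lam0):
    """Closed-form minimizer of f(x) = 0.5*(x - y)**2 + lam1*|x| + lam0*(x != 0)."""
    if abs(y) > lam1 + np.sqrt(2.0 * lam0):
        return np.sign(y) * (abs(y) - lam1)   # soft-threshold survives the L0 test
    return 0.0                                 # otherwise x* = 0 is cheaper

# Brute-force check against a dense grid of candidate x values.
y, lam1, lam0 = 2.3, 0.4, 0.6
f = lambda x: 0.5 * (x - y) ** 2 + lam1 * abs(x) + lam0 * (x != 0)
grid = np.linspace(-5.0, 5.0, 200001)          # step 5e-5, includes 0 exactly
x_bf = grid[np.argmin([f(x) for x in grid])]
```

Here the threshold is λ1 + √(2λ0) ≈ 1.495, so for y = 2.3 the nonzero branch applies and both the closed form and the grid search return y − λ1 = 1.9.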


  • Babenko et al. (2009) B. Babenko, M.H. Yang, S.J. Belongie, Visual tracking with online multiple instance learning, in: CVPR, pp. 983–990.
  • Bao et al. (2012) C. Bao, Y. Wu, H. Ling, H. Ji, Real time robust tracker using accelerated proximal gradient approach, in: CVPR, pp. 1830–1837.
  • Hare et al. (2011) S. Hare, A. Saffari, P.H.S. Torr, Struck: Structured output tracking with kernels, in: ICCV, pp. 263–270.
  • Jepson et al. (2003) A.D. Jepson, D.J. Fleet, T.F. El-Maraghi, Robust online appearance models for visual tracking, IEEE TPAMI 25 (2003) 1296–1311.
  • Jia et al. (2012) X. Jia, H. Lu, M.H. Yang, Visual tracking via adaptive structural local sparse appearance model, in: CVPR, pp. 1822–1829.
  • Jin et al. (2014) W. Jin, R. Liu, Z. Su, C. Zhang, S. Bai, Robust visual tracking using latent subspace projection pursuit, in: ICME, pp. 1–6.
  • Kalal et al. (2012) Z. Kalal, K. Mikolajczyk, J. Matas, Tracking-learning-detection, IEEE TPAMI 34 (2012) 1409–1422.
  • Levey and Lindenbaum (2000) A. Levey, M. Lindenbaum, Sequential Karhunen-Loeve basis extraction and its application to images, IEEE Trans. on IP 9 (2000) 1371–1374.
  • Lin et al. (2009) Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, Y. Ma, Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix, Technical Report, UIUC, 2009.
  • Liu et al. (2010) B. Liu, L. Yang, J. Huang, P. Meer, L. Gong, C. Kulikowski, Robust and fast collaborative tracking with two stage sparse optimization, in: ECCV, 2010, pp. 624–637.
  • Liu et al. (2009) R. Liu, J. Cheng, H. Lu, A robust boosting tracker with minimum error bound in a co-training framework, in: ICCV, pp. 1459–1466.
  • Liu et al. (2014a) R. Liu, W. Jin, Z. Su, C. Zhang, Latent subspace projection pursuit with online optimization for robust visual tracking, IEEE MultiMedia 21 (2014a) 47–55.
  • Liu et al. (2014b) R. Liu, Z. Lin, Z. Su, J. Gao, Linear time principal component pursuit and its extensions using filtering, Neurocomputing 142 (2014b) 529–541.
  • Lu et al. (2013) X. Lu, Y. Wang, Y. Yuan, Sparse coding from a Bayesian perspective, IEEE Transactions on Neural Networks and Learning Systems 24 (2013) 929–939.

  • Mei and Ling (2009) X. Mei, H. Ling, Robust visual tracking using L1 minimization, in: ICCV, pp. 1436–1443.
  • Pan et al. (2013) J. Pan, J. Lim, Z. Su, M.H. Yang, L0-regularized object representation for visual tracking, BMVC (2013).
  • Ross et al. (2008) D.A. Ross, J. Lim, R.S. Lin, M.H. Yang, Incremental learning for robust visual tracking, IJCV 77 (2008) 125–141.
  • Tseng (2008) P. Tseng, On accelerated proximal gradient methods for convex-concave optimization, submitted to SIAM J. Optimiz., 2008.
  • Vojíř and Matas (2014) T. Vojíř, J. Matas, The enhanced flock of trackers, in: Registration and Recognition in Images and Videos, Springer, 2014, pp. 113–136.
  • Wang et al. (2013a) D. Wang, H. Lu, M.H. Yang, Least soft-threshold squares tracking, in: CVPR, pp. 2371–2378.
  • Wang et al. (2013b) D. Wang, H. Lu, M.H. Yang, Online object tracking with sparse prototypes, IEEE TIP 22 (2013b) 314–325.
  • Wu et al. (2013) Y. Wu, J. Lim, M.H. Yang, Online object tracking: A benchmark, in: CVPR, pp. 2411–2418.
  • Zhang et al. (2014) C. Zhang, R. Liu, T. Qiu, Z. Su, Robust visual tracking via incremental low-rank features learning, Neurocomputing 131 (2014) 237–247.
  • Zhang and Maaten (2013) L. Zhang, L. Maaten, Structure preserving object tracking, in: CVPR, pp. 1838–1845.
  • Zhang et al. (2013) T. Zhang, B. Ghanem, S. Liu, N. Ahuja, Robust visual tracking via structured multi-task sparse learning, IJCV 101 (2013) 367–383.
  • Zhang et al. (2015) T. Zhang, S. Liu, C. Xu, S. Yan, B. Ghanem, N. Ahuja, M.H. Yang, Structural sparse tracking, in: CVPR, pp. 150–158.
  • Zhong et al. (2012) W. Zhong, H. Lu, M.H. Yang, Robust object tracking via sparsity-based collaborative model, in: CVPR, pp. 1838–1845.