1 Introduction
Feature tracking in video is an important computer vision task, often used as the first step in finding structure from motion or in simultaneous localization and mapping (SLAM). The celebrated Kanade–Lucas–Tomasi (KLT) algorithm [20, 25, 24] tracks feature points by searching for matches between templates representing each feature and a frame of video.
Despite many alternatives and improvements, it is still one of the best video feature tracking algorithms [1].^1

^1 Feature tracking should be distinguished from object tracking, where there has been significant progress in the development of novel algorithms that significantly improve on previous efforts.
Acknowledgements: This work was supported by NSF award DMS-0956072, the Sloan Foundation, the University of Minnesota Doctoral Dissertation Fellowship Program, and the Feinberg Foundation Visiting Faculty Program Fellowship of the Weizmann Institute of Science.
Supp. web page: http://www.math.umn.edu/lerman/RCTracking/
However, there are several realistic scenarios in which Lucas–Kanade and many of its alternatives do not perform well: poor lighting conditions, noisy video, and transient occlusions that need to be ignored. To deal with such scenarios more robustly, it would be useful to allow the feature points to communicate with each other and decide how they should move as a group, so as to respect the underlying three-dimensional geometry of the scene.
This underlying geometry constrains the trajectories of the tracked points to have a low-rank structure; see [12, 18] for the case of tracking a single rigid object under an affine camera model, and [6, 28, 17, 13] for non-rigid motion and the perspective camera. In this work we combine the low-rank geometry of the cohort of tracked features with the successful nonlinear single-feature tracking framework of Lucas and Kanade [20] by adding a low-rank regularization penalty to the tracking optimization problem. To accommodate dynamic scenes with nontrivial motion we apply our rank constraint over a sliding window, so that we only consider a small number of frames at a given time (a common idea for dealing with non-rigid motions [8, 23, 16]). We demonstrate very strong performance in rigid environments as well as in scenes with multiple and/or non-rigid motions (since the trajectories of all features are still low-rank over short time intervals). We describe experiments with several choices of low-rank regularizers (which are local in time), using a unified optimization framework that allows real-time regularized tracking on a single CPU core.
1.1 Relationship With Previous Work
Geometric structures (and low-rank structures in particular) have been effectively utilized for the problem of optical flow estimation. Irani [19] showed how 3D constraints in the real world and various camera models imply the existence of a low-rank constraint on the flow problem. Brand extended Irani's work to non-rigid motions, while developing a robust, subspace-estimating, flow-based tracker via an incremental singular value decomposition with missing values, as well as by learning an object model [5, 4, 3, 6]. Torresani et al. [28] also extended Irani's work to non-rigid motions by applying rank bounds for recovering 3D non-rigid motion. More recently, Garg et al. [15] introduced hard subspace constraints for long-range optical flow estimation in a variational scheme. Garg et al. [16] improved the performance of this work by making the constraint weak (as an energy regularizer), using a robust energy term, and allowing more general basis terms. Finally, Ricco and Tomasi [23] proposed a Lagrangian approach for long-range motion estimation that allows more reliable detection of occlusions. It estimates a basis for a low-dimensional subspace of the trajectories (as in [16]) and employs a variational method to solve for the best-fit coefficients of the motion trajectories in this basis.

In optical flow estimation the goal is to find displacements of features between consecutive frames, while assuming that the flow field is locally nearly constant. Although the goal in the feature tracking problem is similar, it does not require estimating the flow by enforcing the brightness constancy constraint or a weaker version of it. The subspace constraints above were translated by Irani [19] into an image brightness constraint. However, small errors in the flow field in each frame lead, under this approach, to an accumulation of errors in the trajectories obtained by integrating the flow. These errors are unacceptable for tracking. Weak versions of this constraint for estimating flow over many frames (as in [6, 23, 16]) require rather dense trajectories, which represent continuous regions in the image frame. Indeed, they are based on either continuous variational methods [23, 16] (which often track all pixels in the image domain) or careful model estimation [6] (which requires sufficiently dense sampling of objects in the videos).
In tracking, one instead uses a formulation that allows very precise feature registration (like the Lucas–Kanade tracker [20]), and there is no need to linearize the image to solve an approximation of the feature displacement problem. It is desirable to have a sparse set of features and to track them only in local neighborhoods, to allow a real-time implementation. There is no canonical method for introducing an explicit low-rank constraint as in [19]. We will argue below that a strict subspace constraint is not ideal for the tracking problem, and we will instead promote a soft constraint. This soft constraint differs from the ones advocated for flow estimation in [23] and [16], since those works carefully learn local basis elements and require dense feature sampling.
Torresani and Bregler [26] suggested the partial application of hard low-rank constraints to improve tracking (applying rank bounds as in [28]). They rely on initial Lucas–Kanade tracking [20], from which "reliable" features are identified and used to estimate a model for the scene. They then use this constraint to re-track the "unreliable" features (whose trajectories are now confined to a known subspace). Since they search in the space of trajectories, their minimization strategy is completely different from ours. Their tracker is also non-causal, since it needs the full sequence to start tracking, so a real-time implementation is not possible. This work was extended in [27] to develop a causal tracker in the same spirit that does not rely on a set of "reliable" feature tracks. However, both methods require setting the rank of the constraint a priori, and they impose the constraint over very long time spans (up to the entire sequence), making the algorithms less applicable to dynamic scenes.
Buchanan and Fitzgibbon [8] continuously update a non-rigid motion model over a sliding temporal window. This motion model is used as a motion prior in a conventional Bayesian template tracker for a single feature. The local information is combined with a weaker global low-rank approximation of the set of initial local trajectories (in the spirit of [6, 23, 16], though different from the low-rank constraint of this paper). Similarly to [6], this low-rank constraint guides the tracking via Bayesian modeling.
Another line of work takes tracked feature points in videos and then uses the underlying subspace structure of rigid bodies to segment the different motions of those bodies [12, 30, 11, 31, 14]. This is related to the large body of work on recovering rigid or non-rigid structure from motion; see [18] or [13] and the references therein. However, these works depend heavily on good tracking, and it would be desirable to simultaneously track and segment motion, or to exploit the subspace structure to improve tracking prior to finding structure from motion.
2 On Low-Rank Feature Trajectories
Under the affine camera model, the feature trajectories of a set of features from a rigid body should lie in an affine subspace of dimension 3, or equivalently a linear subspace of dimension 4 [12, 18]. However, subspaces corresponding to very degenerate motion are lower-dimensional than those corresponding to general motion [18].
Feature trajectories in non-rigid scenarios exhibit significant variety, but some low-rank models may still be successfully applied to them [6, 28, 17, 13, 16]. Similarly to [8, 16] (though in a different setting), we consider a sliding temporal window; over short durations the motion is simple and the feature trajectories are of lower rank. The restriction on the length of feature trajectories can also help in satisfying an approximate local affine camera model in scenes that violate the affine camera model. In general, depth disparities give rise to low-dimensional manifolds [18], which are only locally approximated by linear spaces.
Finally, even in the case of multiple moving rigid objects, the set of trajectories is still low-rank (confined to a union of a few low-rank subspaces). In all of these scenarios the rank is unknown in general.
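The affine-camera rank bound is easy to verify numerically. The following sketch (an illustration of the claim above, not code from our implementation) projects a random rigid 3D point set through a sequence of random affine cameras and checks the rank of the stacked trajectory matrix, with and without column centering:

```python
import numpy as np

rng = np.random.default_rng(0)
P, F = 50, 30                          # number of features, number of frames
X = rng.standard_normal((3, P))        # 3D points on a rigid body (world coords)

rows = []
for f in range(F):
    A = rng.standard_normal((2, 3))    # affine camera: 2x3 linear part
    t = rng.standard_normal((2, 1))    # plus a 2D translation
    rows.append(A @ X + t)             # 2 x P image coordinates in frame f
W = np.vstack(rows)                    # 2F x P matrix of feature trajectories

rank = np.linalg.matrix_rank(W)
print(rank)  # every row is a combination of the 3 rows of X and the ones row
```

Centering the columns of the trajectory matrix removes the translation component, dropping the rank bound from 4 to 3 (the affine-to-linear reduction discussed in Section 3.2).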
3 Feature Tracking
Notation: A feature at a location $\mathbf{z}$ in a given frame of a video is characterized by a template $T$, which is an $n \times n$ subimage of that frame centered at $\mathbf{z}$ ($n$ is a small integer, generally taken to be odd, so the template has a center pixel). If $\mathbf{z}$ does not have integer coordinates, $T$ is interpolated from the image. We denote the template of feature $i$ by $T_i$, and we parametrize it so that its pixel values are obtained by sampling the frame around its center.

A classical formulation of the single-feature tracking problem (see, e.g., [20]) is to search for the translation $\mathbf{v}$ that minimizes some distance between a feature's template at a given frame and the next frame of video translated by $\mathbf{v}$; we denote this next frame by $F$. That is, we minimize the single-feature energy function $E_i$:
$E_i(\mathbf{v}) = \rho\big(T_i,\ F(\cdot + \mathbf{v})\big)$   (1)
where, for example, $\rho(a, b) = \|a - b\|_1$ or $\rho(a, b) = \|a - b\|_2^2$. To apply continuous optimization we view $\mathbf{v}$ as a continuous variable, and we thus view $T_i$ and $F$ as functions over continuous domains (implemented with bilinear interpolation).
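As a minimal sketch of this energy (with hypothetical function names, and an absolute-value choice of the loss; the bilinear sampling mirrors the interpolation mentioned above):

```python
import numpy as np

def bilinear(img, ys, xs):
    """Sample img at (possibly non-integer) coordinates via bilinear interpolation."""
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = ys - y0, xs - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def single_feature_energy(template, frame, center, v):
    """E_i(v): absolute-value mismatch between the feature's n x n template
    and the next frame translated by v (sampled around the feature center)."""
    n = template.shape[0]          # n is odd, so the template has a center pixel
    r = n // 2
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    ys = center[0] + v[0] + dy
    xs = center[1] + v[1] + dx
    return np.abs(template - bilinear(frame, ys, xs)).sum()
```

At the true displacement the energy vanishes; elsewhere it grows with the template mismatch.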
3.1 Low Rank Regularization Framework
If we want to encourage a low-rank structure in the trajectories, we cannot view the tracking of different features as separate problems. For $i = 1, \dots, N$, let $\mathbf{v}_i$ denote the position of feature $i$ in the current frame (in image coordinates), and let $\mathbf{V} = (\mathbf{v}_1, \dots, \mathbf{v}_N)$ denote the joint state of all features in the scene. We define the total energy function as follows:
$E(\mathbf{V}) = \frac{1}{N} \sum_{i=1}^{N} \rho\big(T_i,\ F(\cdot + \mathbf{v}_i)\big)$   (2)
where $T_i$ is the template for feature $i$. Now, we can impose desired relationships between features in a scene by imposing constraints on the domain of optimization of (2).
Instead of enforcing a hard constraint, we add a penalty term to (2), which increases the cost of states which are inconsistent with lowrank motion. Specifically, we define:
$\widetilde{E}(\mathbf{V}) = \lambda \sum_{i=1}^{N} \rho\big(T_i,\ F(\cdot + \mathbf{v}_i)\big) + R(\mathbf{V})$   (3)
where $R(\mathbf{V})$ is an estimate of, or proxy for, the dimensionality of the set of feature trajectories over the last several frames of video (past feature locations are treated as constants, so this is a function only of the current state $\mathbf{V}$). Notice that we have replaced the scale factor from (2) with the constant $\lambda$, as this coefficient is now also responsible for controlling the relative strength of the penalty term. We give explicit examples for $R$ in Section 3.2.
This framework gives rise to two different solutions, characterized by the strength of the penalty term (i.e., the choice of $\lambda$). Each has useful real-world tracking applications. In the first case, we assume that most (but not necessarily all) features in the scene approximately obey a low-rank model. This is appropriate if the scene contains non-rigid or multiple moving bodies. We can impose a weak constraint by making the penalty term small relative to the other terms. If a feature is strong, it will confidently track the imagery, ignoring the constraint (regardless of whether its motion is consistent with the other features in the scene). If a feature is weak, in the sense that we cannot fully determine its true location by looking only at the imagery, then the penalty term becomes significant and encourages the feature to agree with the motion of the other features in the scene.
In the second case, we assume that all features in the scene are supposed to agree with a low-rank model (and deviations from that model are indicative of tracking errors). We can impose a strong constraint by making the penalty term large relative to the other terms. No small set of features can overpower the constraint, regardless of how strong those features are. This forces all features to move in a way that is consistent with a simple motion. Thus, a small number of features can even be occluded, and their positions will be predicted from the motion of the other features in the scene. We further explain these two scenarios and demonstrate them with figures in the supplementary material.
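Assuming the additive fit-plus-penalty form described above, the total energy evaluation can be sketched as follows (the helper names and the generic `penalty` callback are illustrative, not from our implementation):

```python
import numpy as np

def total_energy(V, fit_terms, history, lam, penalty):
    """Regularized tracking energy: lam times the sum of the per-feature
    template-fit energies at the proposed positions V (an N x 2 array),
    plus a low-rank penalty on the sliding-window trajectory matrix.

    fit_terms: maps V to an array of N per-feature fit costs.
    history:   2(L-1) x N array of (frozen) past feature positions.
    penalty:   maps the 2L x N trajectory matrix to a scalar R.
    """
    W = np.vstack([V.T, history])   # current x/y rows stacked over the past
    return lam * fit_terms(V).sum() + penalty(W)
```

For example, `penalty` could be the nuclear norm, `lambda W: np.linalg.norm(W, 'nuc')`; the choices we actually evaluate are described in Section 3.2.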
3.2 Specific Choices of the Low-Rank Regularizer
There is now a large body of work on low-rank regularization, e.g., [10, 9, 21]. We restrict ourselves to showing results using the three choices for $R$ described below. Each choice we present defines $R$ in terms of a matrix $W$: the matrix whose $i$th column contains the trajectory of feature $i$ within a sliding window of $L$ consecutive frames (the current frame and $L-1$ past frames). Specifically, the $i$th column of $W$ is $(\mathbf{v}_i^T, \mathbf{h}_i^T)^T$, where $\mathbf{v}_i$ is the current (variable) position of feature $i$ and $\mathbf{h}_i$ contains the $x$ and $y$ pixel coordinates of feature $i$ from the $L-1$ frames in the past (past feature locations are treated as known constants). One may alternatively center the columns of $W$ by subtracting from each column the average of all columns. Most constraints derived for trajectories (assuming, for instance, rigid motion) actually confine trajectories to a low-rank affine subspace (as opposed to a linear subspace). Centering the columns of $W$ transforms an affine constraint into a linear one. Alternatively, one can forgo centering and view an affine constraint as a linear constraint in one dimension higher. We report results for both approaches.
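Assembling the sliding-window matrix and the optional centering step might look as follows (a sketch with our own naming: `current` holds the variable positions, `past` the frozen history):

```python
import numpy as np

def trajectory_matrix(current, past, center=False):
    """Stack the current (N x 2) positions over the past frames' positions
    to get the 2L x N sliding-window trajectory matrix W. Optionally
    subtract the mean column, turning an affine constraint into a linear one."""
    W = np.vstack([current.T] + [p.T for p in past])   # one 2-row block per frame
    if center:
        W = W - W.mean(axis=1, keepdims=True)
    return W
```

During optimization only the first two rows (the current positions) vary; the remaining rows are constants.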
Explicit Factorizations
A simple method for enforcing the structure constraint is to write $W = AB$, where $A$ is a $2L \times r$ matrix and $B$ is an $r \times N$ matrix, with $r$ the target rank. However, as mentioned in the previous section, because the feature tracks often do not lie exactly on a subspace, due to deviations from the camera model or non-rigidity, an explicit constraint of this form is not suitable.
However, an explicit factorization can be used in a penalty term by measuring the deviation of $W$, in some norm, from its approximate low-rank factorization. For example, if we let
$W = U \Sigma V^T$   (4)
denote the SVD of $W$, we can take $R$ in (3) to be $\|W - U_r \Sigma_r V_r^T\|_F$, where $U_r$ is the matrix of the first three or four columns of $U$, and $V_r^T$ is the matrix of the first three or four rows of $V^T$ (with $\Sigma_r$ truncated accordingly). This corresponds to penalizing via $\big(\sum_{j > r} \sigma_j^2\big)^{1/2}$, where $\sigma_j$ is the $j$th singular value of $W$. As above, since the history is fixed, $U$, $\Sigma$, and $V$ are functions of $\mathbf{V}$.
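The stated equivalence between the factorization residual and the tail singular values (a consequence of the Eckart–Young theorem) can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((12, 8))            # stand-in for a 2L x N trajectory matrix
U, s, Vt = np.linalg.svd(W, full_matrices=False)

r = 4                                        # assumed rank of the motion model
W_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]  # best rank-r approximation of W
residual = np.linalg.norm(W - W_r, 'fro')    # the penalty R
tail = np.sqrt((s[r:] ** 2).sum())           # sqrt of the sum of sigma_j^2, j > r
print(abs(residual - tail))                  # agrees up to floating point
```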
Nuclear Norm
A popular alternative to explicitly keeping track of the best-fit low-dimensional subspace of $W$ is to use the matrix nuclear norm and define
$R(\mathbf{V}) = \|W\|_* = \|\boldsymbol{\sigma}(W)\|_1$   (5)
This is a convex proxy for the rank of $W$ (see, e.g., [10, 9]). Here $\boldsymbol{\sigma}(W)$ is the vector of singular values of $W$, and $\|\cdot\|_1$ is the $\ell_1$ norm. Unlike explicit factorization, where only energy outside the first $r$ principal components of $W$ is penalized, the nuclear norm will favor a lower-rank $W$ over a higher-rank one even when both matrices have rank at most $r$. Thus, using this kind of penalty favors simpler track-point motions over more complex ones, even when both are technically permissible.

Empirical Dimension
Empirical dimension [22] refers to a class of dimension estimators depending on a parameter $\varepsilon \in (0, 1)$. The empirical dimension of $W$ is defined to be:
$\hat{d}_\varepsilon(W) = \left( \frac{\|\boldsymbol{\sigma}(W)\|_\varepsilon}{\|\boldsymbol{\sigma}(W)\|_1} \right)^{\varepsilon / (1 - \varepsilon)}$   (6)
Notice that we use norm notation, although $\|\cdot\|_\varepsilon$ is only a pseudo-norm for $\varepsilon < 1$. When $\varepsilon = 1/2$, this is sometimes called the "effective rank" of the data matrix [29].
Empirical dimension satisfies a few important properties, which are verified in [22]. First, it is invariant under rotation and scaling of the data set. Additionally, in the absence of noise, empirical dimension never exceeds the true dimension, and it approaches the true dimension as the number of measurements goes to infinity for spherically symmetric distributions. Thus, $\hat{d}_\varepsilon$ is a true dimension estimator (whereas the nuclear norm is only a proxy for dimension). To use empirical dimension as our regularizer, we define $R(\mathbf{V}) = \hat{d}_\varepsilon(W)$.
Empirical dimension is governed by its parameter $\varepsilon$. An $\varepsilon$ near $0$ results in a "strict" estimator, which is appropriate for estimating dimension in situations with little noise, where the data are expected to lie on true linear subspaces. If $\varepsilon$ is near $1$ then $\hat{d}_\varepsilon$ is a lenient estimator, less sensitive to noise and more tolerant of data sets that are only approximately linear. We use the same fixed value of $\varepsilon$ in all of the experiments we present, although other tested values also worked well.
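A sketch of this regularizer, assuming the form given in (6) (consult [22] for the authoritative definition), on noise-free data from a known subspace:

```python
import numpy as np

def empirical_dimension(W, eps=0.5):
    """Empirical dimension, assuming the form in (6):
    (||sigma(W)||_eps / ||sigma(W)||_1)^(eps / (1 - eps)),
    where ||.||_eps is the eps pseudo-norm of the singular value vector."""
    s = np.linalg.svd(W, compute_uv=False)
    pseudo = (s ** eps).sum() ** (1.0 / eps)   # ||sigma||_eps
    return (pseudo / s.sum()) ** (eps / (1.0 - eps))

rng = np.random.default_rng(5)
basis = rng.standard_normal((10, 3))        # spans a 3-dimensional subspace
W = basis @ rng.standard_normal((3, 40))    # noise-free data in that subspace
d = empirical_dimension(W)                  # never exceeds the true dimension, 3
```

The invariance properties from [22] (insensitivity to rotation and scaling of the data) are easy to confirm on this example.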
3.3 Implementation Details
We fix the length $L$ of the sliding window and choose the form of the penalty term $R$ in (3) so that all terms in the total energy function behave linearly in a known range of values. If our fit terms behaved quadratically, it would be more challenging to balance them against a penalty term. We also tested a Huber loss function for $\rho$ and concluded that such regularization is not needed.

We fix a parameter for each penalty form (selected empirically; see the supplementary material for our procedure), which determines the strength of the penalty. The weak and strong regularization parameters are set as follows:
(7) 
Under the weak scaling, a perfectly matched feature contributes almost nothing to the total energy, a poorly matched feature contributes a bounded amount, and the penalty term contributes an amount of the same order. Since we do not divide the contributions of each feature by the number of features, the penalty term's contribution is comparable in magnitude to that of a single feature. The strong scaling implies that the penalty term is on the same scale as the sum of the contributions of all of the features in the scene.
Minimization Strategy
The total energy function we propose for constrained tracking is non-convex, since the contributions from the template fit terms are not convex (even if $\rho$ is convex); this is also the case for other feature tracking methods, including the Lucas–Kanade tracker. We employ a 1st-order descent approach to drive the energy to a local minimum.
To reduce the computational load of feature tracking, some trackers use 2nd-order methods for optimization (see [1]). This works well when tracking strong features, but in our experience it can be unreliable when dealing with weak or ambiguous features. Since we are explicitly trying to improve tracking accuracy on poor features, we opt for a 1st-order descent approach instead.
The simplest 1st-order descent method is (sub)gradient descent. Unfortunately, because there can be a very large difference in magnitude between the contributions of strong and weak features to our total energy, our problem is not well-conditioned. If we pursue standard gradient descent, the strong features dictate the step direction and the weak features have very little effect on it. Ideally, once the strong features are correctly positioned, they would no longer dominate the step direction; if we could measure the gradient of our objective function perfectly, this would be the case. In practice, the error in our numerical gradient estimate can be large enough to prevent the strong features from ever relinquishing control over the step direction. The result is that in a scene with both very strong and very weak features, the weak features may not be tracked.
To remedy this, we compute our step direction by blending the gradient of the energy function with a vector that corresponds to taking equal-sized gradient descent steps separately for each feature. We use a fast line search in each iteration to find the nearest local minimum in the step direction. This compromise allows efficient descent while ensuring that each feature has some control over the step direction (regardless of feature strength).
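The blended step direction can be sketched as follows (the 50/50 blend weight is an illustrative choice, not necessarily the one used in our implementation):

```python
import numpy as np

def step_direction(grad, alpha=0.5):
    """Blend the (normalized) raw energy gradient with its 'semi-normalized'
    version, in which each feature's 2-vector of partial derivatives is
    rescaled to unit length, so weak features retain influence on the step."""
    g = grad.reshape(-1, 2)                            # one (x, y) pair per feature
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_hat = g / np.maximum(norms, 1e-12)               # per-feature unit vectors
    blended = (alpha * grad / max(np.linalg.norm(grad), 1e-12)
               + (1 - alpha) * g_hat.ravel())
    return -blended                                    # descend
```

With a pure gradient step, a feature whose partials are a thousandth the size of the strongest feature's would barely move; after semi-normalization its share of the step direction is comparable.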
Because the energy is not convex, it is important to choose a good initial state. We use a combination of two strategies to initialize the tracking: first, we generate our initial guess of $\mathbf{V}$ by registering the entire new frame of video against the previous one (at lower resolution); second, we use multi-resolution (pyramidal) tracking, so that approximate motion on a large scale can bring us close to the minimum before we track at finer resolution levels (see [2]).
We now explain the details of the algorithm. Let $F$ denote a full new frame of video and let $\mathbf{V}$ be the concatenation of the feature positions in the previous frame. We form a pyramid for $F$, where level $0$ is the full-resolution image and each higher level has half the vertical and half the horizontal resolution of the level below it. To initialize the optimization, we take the full frame (at the coarsest resolution) and register it against the previous frame (at the same resolution) using gradient descent and an absolute value loss function. We initialize each feature's position in the current frame by taking its position in the previous frame and adding the offset between the frames found through this registration. Once we have our initial $\mathbf{V}$, we begin optimization on the top pyramid level. When done on the top level, we use the result to initialize optimization on the level below it, and so on, until we have found a local minimum on level $0$.

On any given pyramid level, we perform optimization by iteratively computing a step direction and conducting a fast line search to find a local minimum in that direction. We impose a minimum and a maximum on the number of steps performed on each level. Our termination condition (on a given level) is met when the magnitude of the derivative of the energy is not significantly smaller than it was in the previous step. To compute our search direction in each step, we first compute the gradient $\mathbf{g}$ of the energy. We then compute a semi-normalized version of $\mathbf{g}$: we break it into a collection of 2-vectors (the two elements corresponding to each feature's $x$ and $y$ coordinates are kept together), normalize each of them, and recombine the normalized vectors into $\hat{\mathbf{g}}$. We blend $\hat{\mathbf{g}}$ with $\mathbf{g}$ to compute our step direction. Algorithm 1 summarizes the full process. Source code for our implementation of this algorithm will be available on the first author's web page.
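The coarse-to-fine control flow described above can be condensed into the following schematic (the per-level optimizer is passed in as a stand-in, and the pyramid here uses simple 2x2 averaging):

```python
import numpy as np

def build_pyramid(frame, levels):
    """Level 0 is the full-resolution image; each higher level halves the
    vertical and horizontal resolution (here via 2x2 block averaging)."""
    pyr = [np.asarray(frame, dtype=float)]
    for _ in range(levels - 1):
        f = pyr[-1]
        h, w = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
        pyr.append(f[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def track_frame(V_prev, pyramid, minimize_on_level):
    """Initialize from the previous frame's positions, then refine the joint
    state coarse-to-fine; minimize_on_level(V, image, level) stands in for
    the per-level descent with line search."""
    levels = len(pyramid)
    V = V_prev / (2 ** (levels - 1))       # positions in top-level coordinates
    for lvl in range(levels - 1, -1, -1):
        V = minimize_on_level(V, pyramid[lvl], lvl)
        if lvl > 0:
            V = V * 2.0                    # rescale to the next finer level
    return V
```

With an identity optimizer this pipeline returns the previous positions unchanged, which makes the coordinate rescaling easy to sanity-check.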
Efficiency and Complexity
We have found that our algorithm typically converges in about 20 iterations or fewer on each pyramid level (with fewer iterations on the lower levels). In our experiments we found that 4 pyramid levels were sufficient for reliable tracking; thus, on average, fewer than 80 iterations are required to track from one frame to the next. A single iteration requires one gradient evaluation and multiple evaluations of the energy (the complexities of these evaluations are given in the supplementary material). Our C++ implementation (which makes use of OpenCV) runs with a temporal window of 6 frames^2 on an Intel i5 CPU at approximately 16 frames per second. SIMD instructions are used in places, but no multithreading, so faster processing rates are possible. With a larger temporal window our algorithm still runs at several frames per second.

^2 Accuracy for this window size is only slightly worse than for larger windows, and it enables faster processing. See the supplementary material for a brief comparison.
4 Experiments
To evaluate our method, we conducted tests on several real video sequences in circumstances that are difficult for feature tracking. These included shaky footage in low-light environments. The resulting videos contained dark regions with few good features, and the unsteady camera motion and poor lighting introduced time-varying motion blur.
In these video sequences it proved very difficult to hand-register features for ground truth. In order to present a quantitative numerical comparison, we also collected higher-quality video sequences and synthetically degraded them. We used a standard Lucas–Kanade tracker on the non-degraded videos to generate ground truth (the output was human-verified and corrected). We therefore present qualitative results on real, low-quality video sequences, as well as quantitative results on a set of synthetically degraded videos.
                                     Video Number
Tracker                                  1        2        3        4        5        6        7        8   Average
KLT                                  959.6   2484.0    958.3   1242.4   1630.2   1391.4   2105.0   4387.6    1894.8
1st-Order Descent                     92.5    137.8    159.5    273.2     87.5    198.4     70.6    685.7     213.2
LDOF                                 508.0    408.6    898.5    385.2    104.9    122.3    256.1    721.3     425.6
Multi-Tracker, Emp Dim, Uncentered   104.1    139.3    128.8    241.9     75.2    136.9     58.8    305.5     148.8
Multi-Tracker, Emp Dim, Centered     102.9    115.3    108.3    226.8     69.7    128.6     54.1    292.5     137.3
Multi-Tracker, Nuc Norm, Uncentered  106.8    134.0    131.7    243.9     73.4    132.6     58.4    293.9     146.8
Multi-Tracker, Nuc Norm, Centered    103.5    137.7    141.5    243.9     73.4    135.5     60.3    341.0     154.6
Multi-Tracker, Exp Fact, Uncentered  103.4    169.7    131.3    246.1     74.6    134.3     62.5    307.3     153.7
Multi-Tracker, Exp Fact, Centered    102.9    167.0    129.1    245.0     73.2    133.4     58.9    302.5     151.5
                                     Video Number
Tracker                                  1        2        3        4        5        6        7        8   Average
KLT                                   10.6      7.8     13.5      9.6      7.6      9.5      7.8      2.0       8.6
1st-Order Descent                     38.9     30.3     68.7     27.7     34.0     41.1     44.1      3.9      36.1
LDOF                                   8.8     12.0     13.7     20.4     25.5     68.4     23.1      6.7      22.3
Multi-Tracker, Emp Dim, Uncentered    73.4     35.3    111.5     55.6     70.2    102.3     63.7     14.0      65.8
Multi-Tracker, Emp Dim, Centered      74.3     38.2    111.5     58.5     69.1    104.4     65.6     14.0      67.0
Multi-Tracker, Nuc Norm, Uncentered   77.2     35.3    108.4     53.8     62.1    105.5     68.5     13.6      65.6
Multi-Tracker, Nuc Norm, Centered     78.2     33.8    114.9     54.3     66.9    109.0     65.6     13.8      67.1
Multi-Tracker, Exp Fact, Uncentered   76.2     34.3    114.9     54.3     66.9     98.3     68.5     13.6      65.9
Multi-Tracker, Exp Fact, Centered     74.3     34.8    111.5     53.0     68.0    109.0     65.6     14.3      66.3
4.1 Qualitative Experiments on Real Videos
In our tests on real video sequences containing low-quality features, single-feature tracking does not provide acceptable results. When following a non-distinctive feature, the single-feature energy function often flattens out in one or more directions. A tracker may move in any ambiguous direction without realizing a better or worse match with the feature's template. This results in the tracked location drifting away from the feature's true location (i.e., "wandering"). This is not a technical limitation of one particular tracking implementation. Rather, it is a fundamental problem: the local imagery in a small neighborhood of a feature does not always contain enough information to deduce the feature's motion between frames. This claim can be verified by attempting to hand-register low-quality features while looking only at a small neighborhood of each feature's last known location.
In these situations, our method infers the global motion of the scene from the observable features and uses it to assist in locating the low-quality features. This yields better overall tracking results on hard-to-track videos. Fig. 6 shows characteristic results from our tests. Several real videos are included in the supplementary material, with results from the OpenCV Lucas–Kanade tracker and from our method.
4.2 Experiments on Synthetically Degraded Videos
For this experiment, we collected 8 video sequences of variable length in favorable lighting conditions. We used a Lucas–Kanade tracker to track many features, and we manually verified and corrected the individual trajectories. Features come and go in these sequences (we do not assume all features persist through the entire sequence). The videos include rigid environments as well as one video with multiple rigid bodies and one video with a deformable body. Each sequence contains many feature-frames (the sum of each tracked feature's lifespan, measured in frames).
We degraded each video sequence by first darkening it and adding noise to each frame, then applying a strong Gaussian blur to each frame, and finally adding additional Gaussian noise. Adding noise both before and after blurring gives the effect of noise at different scales (harder to deal with than per-pixel noise only). The test videos are included in the supplementary material.
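This degradation pipeline can be approximated in a few lines (the parameter values here are illustrative, not the ones used in our experiments, and a separable box blur stands in for the Gaussian blur):

```python
import numpy as np

def degrade(frame, rng, darken=0.5, sigma1=0.02, blur_radius=2, sigma2=0.02):
    """Darken, add noise, blur, then add noise again, so the final video
    contains noise at two different scales."""
    f = frame * darken
    f = f + rng.normal(0.0, sigma1, f.shape)        # pre-blur noise
    k = 2 * blur_radius + 1                         # separable box blur
    kernel = np.ones(k) / k
    f = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, f)
    f = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, f)
    f = f + rng.normal(0.0, sigma2, f.shape)        # post-blur noise
    return np.clip(f, 0.0, 1.0)
```

Because the first noise pass is blurred along with the signal, it appears as correlated, large-scale noise in the output, while the second pass remains per-pixel.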
For our comparison, we ran each tracker in two different modes. In the first mode, we initialized each feature at its ground-truth location and re-initialized any feature that wandered more than a fixed number of pixels from ground truth; we recorded the average number of frames between feature re-initializations. In the second mode, we only tracked the features that were visible in frame 0, and features were never re-initialized; we measured the mean difference between the output trajectories and ground truth after a fixed number of frames.
As a reference, we compared against the pyramidal Lucas–Kanade tracker in the current OpenCV release (2.4.3). For a more recent comparison, we used LDOF (Large Displacement Optical Flow [7]) to generate dense flow fields for each sequence, and we interpolated these flow fields to generate long-run trajectories for the features. We also implemented our own single-feature gradient descent tracker (with an absolute value loss function). We present results for our rank-constrained tracker with the three previously introduced penalty functions; for each penalty function we present results with and without centering the history matrix $W$. In this experiment, whenever our algorithm is run with the penalty term, we use a weak constraint. All trackers were run on grayscale video. The results of this experiment are presented in Tables 1 and 2. An additional set of tests (on shorter video sequences) is included in the supplementary material.
4.3 Analysis of Results
From Tables 1 and 2, we can see that imposing our weak rank constraint significantly improves overall tracking ability, with all three rank regularizers that we tested showing improved performance. Comparing the Lucas–Kanade results to those of our single-feature gradient descent tracker, we see a very large gap in performance. The core differences between these two algorithms are the definition of $\rho$ (squared error vs. absolute value) and the method of optimization employed. This performance difference supports our earlier claim that the 2nd-order optimization technique used to accelerate convergence in the Lucas–Kanade algorithm can be unreliable when tracking poor-quality features.
5 Conclusion
Rank constraints have been successfully applied to several problems in computer vision, including motion segmentation and optical flow estimation. We have expanded on this previous work by developing a feature tracking framework that allows these constraints to be used reliably to assist in the tracking of features, both in rigid environments and in more general, non-rigid settings. The framework permits these constraints to be imposed forcefully, allowing one to track features on a rigid object even if some features are occluded, or weakly, where the constraints only help locate poor-quality features that cannot be tracked on their own. We showed that the weak constraint can yield significant gains in tracking performance, even in non-rigid scenes (with multiple or deformable objects). The framework is completely causal and does not require explicitly modeling structure or motion in a scene. Furthermore, the algorithm we proposed is not prohibitively computationally expensive (real-time performance has been achieved). Our results provide evidence that, when tracking features in low-quality video (especially in a rigid or semi-rigid scene), a 1st-order descent scheme is more robust than the 2nd-order methods used in standard Lucas–Kanade trackers, and that applying rank regularizers to track a cohort of features yields better performance than classical single-feature tracking.
References
 [1] S. Baker and I. Matthews. Lucas-Kanade 20 years on: A unifying framework. IJCV, 56(3):221–255, 2004.
 [2] J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani. Hierarchical model-based motion estimation. In ECCV, volume 588, pages 237–252. 1992.
 [3] M. Brand. Morphable 3d models from video. In CVPR, volume 2, pages II–456–II–463, 2001.
 [4] M. Brand. Incremental singular value decomposition of uncertain data with missing values. In ECCV, volume 2350, pages 707–720. 2002.
 [5] M. Brand. Subspace mappings for image sequences. In Proc. Workshop Statistical Methods in Video Processing, 2002.
 [6] M. Brand and R. Bhotika. Flexible flow for 3d non-rigid tracking and shape recovery. In CVPR, 2001.
 [7] T. Brox and J. Malik. Large displacement optical flow: descriptor matching in variational motion estimation. IEEE TPAMI, 33(3):500–513, 2011.
 [8] A. Buchanan and A. W. Fitzgibbon. Combining local and global motion models for feature point tracking. In CVPR, 2007.

 [9] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3):11, 2011.
 [10] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. FoCM, 9:717–772, 2009.
 [11] G. Chen and G. Lerman. Spectral curvature clustering (SCC). IJCV, 81(3):317–330, 2009.
 [12] J. Costeira and T. Kanade. A multibody factorization method for independently moving objects. IJCV, 29(3):159–179, 1998.
 [13] Y. Dai, H. Li, and M. He. A simple prior-free method for non-rigid structure-from-motion factorization. In CVPR, pages 2018–2025, 2012.
 [14] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. PAMI, PP(99):1–1, 2013.
 [15] R. Garg, L. Pizarro, D. Rueckert, and L. de Agapito. Dense multi-frame optic flow for non-rigid objects using subspace constraints. In ACCV, pages 460–473. 2010.
 [16] R. Garg, A. Roussos, and L. Agapito. A variational approach to video registration with subspace constraints. IJCV, 104(3):286–314, 2013.
 [17] R. Hartley and R. Vidal. Perspective non-rigid shape and motion recovery. In ECCV (1), pages 276–289, 2008.
 [18] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
 [19] M. Irani. Multi-frame correspondence estimation using subspace constraints. IJCV, 48(3):173–194, 2002.
 [20] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In IJCAI, pages 674–679, 1981.
 [21] S. Negahban, P. D. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for highdimensional analysis of estimators with decomposable regularizers. Stat. Science, 27(4):538–557, 2012.
 [22] B. Poling and G. Lerman. A new approach to twoview motion segmentation using global dimension minimization. arXiv:1304.2999, 2013.
 [23] S. Ricco and C. Tomasi. Dense Lagrangian motion estimation with occlusions. In CVPR, pages 1800–1807, 2012.
 [24] J. Shi and C. Tomasi. Good features to track. In CVPR, pages 593–600. IEEE, 1994.
 [25] C. Tomasi and T. Kanade. Detection and tracking of point features. Technical Report CMU-CS-91-132, 1991.
 [26] L. Torresani and C. Bregler. Space-time tracking. In ECCV (1), pages 801–812, 2002.
 [27] L. Torresani, A. Hertzmann, and C. Bregler. Robust model-free tracking of non-rigid shape. Technical Report TR2003-840, New York University, 2003.
 [28] L. Torresani, D. Yang, E. Alexander, and C. Bregler. Tracking and modeling non-rigid objects with rank constraints. In CVPR, volume 1, pages I–493–I–500, 2001.

 [29] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing, pages 210–268. Cambridge Univ. Press, 2012.
 [30] J. Yan and M. Pollefeys. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In ECCV, volume 4, pages 94–106, 2006.
 [31] T. Zhang, A. Szlam, Y. Wang, and G. Lerman. Hybrid linear modeling via local best-fit flats. IJCV, 100:217–240, 2012.