1 Introduction
The identification of regions of interest is typically a critical preprocessing step for various high-level computer vision applications, including event detection, video surveillance, human motion analysis, etc. Background subtraction is a widely used technique for pixel-wise segmentation of foreground regions out of background scenes. Unlike foreground object detection algorithms, background subtraction methods typically produce much more accurate segmentations of foreground regions, rather than mere detection bounding boxes, without the need to train individual object detectors. A great number of traditional background subtraction methods and algorithms have been proposed
[34, 8, 21, 32, 12, 28]. Most of these methods focus on modeling the background under the assumption that the camera is stationary. However, more and more videos are captured from moving platforms, such as camera phones and cameras mounted on ground vehicles, robots, aerial drones, etc. Traditional background subtraction algorithms are no longer applicable to videos captured from such non-stationary platforms [7]. The rapidly increasing use of moving platforms for video capture creates a high demand for general background subtraction algorithms that are not only as effective as traditional background subtraction but also applicable to moving-camera videos.

Similar to most video segmentation methods, a few works [26, 5] resort to processing the whole video offline. Offline methods can typically produce good results on short sequences, since the information in later frames can significantly benefit the segmentation of earlier frames. However, since they need to store and process information over the whole video, the memory and computational costs grow quickly as the number of frames increases [7]. Additionally, in various applications, such as video surveillance and security monitoring, videos need to be analyzed as they are being streamed in real time, so an efficient online background subtraction method is greatly in demand.
The key to handling long sequences in an online way is to learn and maintain models of the background and foreground layers. Such models accumulate and update evidence over a large number of frames and also supply a valuable knowledge foundation for high-level vision tasks. Recently, a few online background subtraction methods for moving cameras have been proposed [31, 20, 9, 22, 41]. Most formulate the task as a binary segmentation problem under the assumption of a single foreground object, which naturally results in poor segmentation when multiple moving objects appear in the scene. Especially when objects cross each other, the motion estimation for each object becomes highly ambiguous, which further degrades the performance of background subtraction.
To remedy this drawback, we propose a general multi-layer-based framework capable of handling multiple foreground objects in the scene. Objects are automatically detected based on motion inconsistency, and an independent processing layer is assigned to every foreground object as well as to the background. In each layer, the same process is performed concurrently inside a “processing block”, which takes the accumulated information and the new evidence of the layer as input, and outputs a probability map indicating the confidence of each pixel belonging to that layer. In this paper, we design such a “processing block” with the following three steps. (a) The motion model is first estimated by Gaussian Belief Propagation [1] with the motion vectors of the corresponding trajectories as evidence. (b) The appearance model and the prior probability map are predicted by propagating the previous ones with the estimated motion model. (c) Given the current frame as new evidence, Kernel Density Estimation [8] is employed to infer the probability map as the output. Finally, based on the collection of probability maps produced by the “processing blocks”, the pixel-wise segmentation of the current frame is generated by multi-label graph cut. Moreover, since this is, to the best of our knowledge, the first work to tackle the multi-label background subtraction problem in moving-camera scenarios, we also design a methodology to evaluate its performance and show that our method outperforms the state-of-the-art methods.
2 Related Work
Motion Estimation and Compensation: A freely moving camera introduces movement in the projected background scene and thus complicates the background subtraction problem. An intuitive way to tackle this problem is to compensate for the camera motion. A few pioneering works resort to estimating a homography [13] that characterizes the geometric transformation of the background scene between consecutive frames. Typically, RANSAC [11] and its variant MLESAC [36] are employed to achieve robust estimation from a set of feature point matches. Jin et al. [15] model the scene as a set of planar regions, where each background pixel is assumed to belong to one of these regions; homographies are used to rectify each region to its corresponding planar representation in the model. Zamalieva et al. [41] leverage geometric information to develop multiple transformation models and choose the one that best describes the relation between consecutive frames.
Recently, motion estimation has been widely employed to comprehensively specify the motion of every pixel [25, 2, 20, 22]. The works [20, 22] used optical flow as the evidence. Kwak et al. [20] divided the images into small blocks in a grid pattern and employed non-parametric belief propagation to estimate the motion field based on the average optical flow of each block. The follow-up work [22] improved the quality of the motion estimation by replacing blocks with superpixels as the model unit. On the other hand, in [25, 2], optical flow orientations were claimed to be independent of object depth in the scene, and were used to cluster pixels that have similar real-world motion, irrespective of their depth. However, the heavy dependency on optical flow makes these methods susceptible to noise in the optical flow estimation. In contrast, our method improves motion model estimation by employing Gaussian Belief Propagation [1] with the motion vectors of sparse feature points as more robust evidence.
Appearance Modeling: Traditionally, statistical representations of the background scene have been proposed to estimate spatially extendable background models. Hayman and Eklundh [14] built a mixture-of-Gaussians mosaic background model. Ren et al. [29] used motion compensation to predict the position of each pixel in a background map and modeled the uncertainty of that prediction by a spatial Gaussian distribution. The construction of an image mosaic associated with a traditional mixture-of-Gaussians background model was also claimed to be effective in [23, 30]. However, the hyperparameters required by such parametric models restrict their adaptability and application. On the contrary, we employ the non-parametric Kernel Density Estimation method [8] to build models of the appearance of foreground and background regions, making our approach more stable and widely applicable.

Layered Representation: The layered representation, referring to approaches that model the scene as a set of moving layers, has been used for foreground detection [27, 16] and motion segmentation [39, 37, 19]. In [27], the background was modeled as a union of non-parametric layer models to facilitate detecting the foreground under static or dynamic backgrounds. Kim et al. [16] proposed a layered background model where a long-term background model is used alongside multiple short-term background models. Wang [39] used an iterative method to achieve layered motion segmentation. Torr et al. [37] modeled the layers as planes in 3D and integrated priors in a Bayesian framework. The method in [19] models spatial continuity while representing each layer as a set of segments. A common theme of these layered models is the assumption that the video is available beforehand [7]; such an assumption prevents their use for processing videos from streaming sources. Some dynamic texture methods [3, 4, 24] also employed layered models to tackle complex dynamic backgrounds, but with stationary cameras. To the best of our knowledge, the proposed method is the first layered model applied in moving-camera scenarios.
The organization of this paper is as follows. An overview of our proposed framework is presented in Section 3. Section 4 describes the trajectory labeling. The components inside the “processing block” are described in Sections 5 and 6, and the final pixel-wise multi-labeling is presented in Section 7. Finally, Section 8 presents quantitative and qualitative experimental results based on two criteria.
3 Framework Overview
The proposed framework is illustrated in Figure 1. First, we employ the feature point tracking method presented in [35] to generate feature trajectories. The generated trajectories are then clustered into different layers based on motion inconsistency, and the labels of trajectories are continuously propagated over frames using dynamic label propagation [42]. Each cluster of trajectories is assigned to the corresponding layer, and the number of layers is adapted according to the number of foreground objects appearing in each frame.
Each layer possesses an independent “processing block” that produces a posterior probability map. Let $k$ and $t$ denote the index of the layer and the frame, respectively. Inside the “processing block”, we have two sources of input: the appearance model $A_{t-1}^k$ and the prior probability map $\pi_{t-1}^k$ produced from the previous frame, and a group of corresponding sparse trajectories. The first step is the inference of the motion model $M_t^k$ of each layer using Gaussian Belief Propagation, with the motion vectors of the corresponding trajectories as the evidence. Then, the new appearance model $A_t^k$ is obtained by shifting the previous appearance model based on the estimated motion model $M_t^k$, and the new prior probability map $\pi_t^k$ is inferred from the previous one in the same way. Given the current frame $I_t$ as the new observation, the likelihood $P(I_t \mid A_t^k)$ is estimated by Kernel Density Estimation (KDE) [8] with the propagated appearance model of every layer. The posterior probability map of each layer is then inferred as

$$P_t^k = \frac{1}{Z}\, P\big(I_t \mid A_t^k\big)\, \pi_t^k \qquad (1)$$

where $Z$ is the partition function. With the collection of posterior probability maps from all layers, we obtain the final pixel-wise labeling by optimizing a cut on a multi-label graph with minimal cut energy. At the end of the whole process, the appearance models are updated with the current frame and labels. In the following sections, we describe these processing steps in detail.
4 Trajectory Labeling and Propagation
The feature point tracking method [35] we employ achieves good performance in feature trajectory generation. To cluster trajectories, several motion segmentation methods [26, 40] provide good solutions, but they may fail when the video does not satisfy the assumption of an affine camera model. To get rid of such an assumption, the online method proposed by Elqursh and Elgammal [10] casts sparse trajectory clustering as a manifold separation problem and dynamically propagates labels over frames. Inspired by [10], we first cluster the trajectories in the initialization frames, and then continuously propagate the trajectory labels over frames using dynamic label propagation [42]. We briefly describe the algorithm here.
4.1 Trajectory Clustering
Given $n$ trajectories, two distance matrices $D^M$ and $D^S$ are defined to represent the differences between trajectories in motion and in spatial location, respectively. The entries $D^M_{ij}$ and $D^S_{ij}$ are the distances between the $i$th and $j$th trajectories over the frames up to $t$. For the detailed definitions, please refer to [26]. The affinity matrix $W$ over the $n$ trajectories is then formulated as

$$W_{ij} = \exp\!\big(-\big(D^M_{ij} + \lambda\, D^S_{ij}\big)\big) \qquad (2)$$

where $\lambda$ is the parameter balancing the two distances.
Considering each trajectory as a node and the affinity matrix entries as edge weights, we cast trajectory clustering as a graph-cut problem. Starting from the initial cluster that contains all trajectories, normalized cuts [33] are employed to perform an optimal binary cut on the initial cluster, and then again on the generated clusters. This recursive process continues until the evaluated normalized-cut cost of a cluster rises above a threshold (a fixed value in our work), which indicates that the cluster of trajectories belongs to a single component (i.e., an object or the background) and needs no further splitting. All trajectories are then assigned labels according to the cluster they belong to.
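The recursive bipartition described above can be sketched with the standard spectral relaxation of normalized cuts. This is a hedged illustration rather than the authors' implementation: it thresholds the Fiedler vector at zero instead of searching over splitting points, and the affinity matrix `W` is assumed to come from the affinity definition above.

```python
import numpy as np

def ncut_bipartition(W):
    """One binary normalized-cut split of an affinity matrix W.
    Spectral relaxation: threshold the second-smallest eigenvector
    (Fiedler vector) of the normalized Laplacian at zero."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetrically normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, 1] >= 0        # boolean partition of the nodes

def ncut_cost(W, mask):
    """Normalized-cut cost cut/assoc(A) + cut/assoc(B); a high cost
    means the cluster is coherent and should not be split further."""
    cut = W[mask][:, ~mask].sum()
    return cut / W[mask].sum() + cut / W[~mask].sum()
```

Recursive splitting would then call `ncut_bipartition` on each resulting cluster until the `ncut_cost` of the best split rises above the (unspecified) threshold.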
4.2 Label Propagation
With the labels of trajectories in the initial frames, label propagation, a semi-supervised learning method, is adopted to infer the labels of trajectories in subsequent frames. We first construct a graph with the trajectories in the previous and current frames as the labeled and unlabeled nodes, respectively. The affinity matrix involving the labeled and unlabeled trajectories is calculated and used as the edge weights between the corresponding nodes. Let $Y_L$ and $Y_U$ denote the labeling probability matrices corresponding to the labeled and unlabeled nodes, respectively. Each row of $Y_L$ is a one-hot vector with a one at the location corresponding to the label of the node and zeros otherwise. To estimate the labeling probability matrix $Y_U$, we apply Markov random walks on the graph [43]. A transition matrix is defined as $P = D^{-1}W$, where $W$ is the affinity matrix, $D$ is the diagonal degree matrix, and the entry $P_{ij}$ is the transition probability from node $i$ to node $j$. Given the partition of nodes into labeled and unlabeled nodes, the matrix $P$ can be split into four blocks:

$$P = \begin{bmatrix} P_{LL} & P_{LU} \\ P_{UL} & P_{UU} \end{bmatrix} \qquad (3)$$

The closed-form solution of the labeling probability matrix of the unlabeled nodes is $Y_U = (I - P_{UU})^{-1} P_{UL}\, Y_L$. The label of node $i$ can then be obtained by

$$l_i = \arg\max_{j}\, (Y_U)_{ij} \qquad (4)$$

where $(Y_U)_{ij}$ is the entry in the row corresponding to node $i$ and the column corresponding to label $j$.
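The closed form above is compact enough to sketch directly. The following is an illustrative implementation (not the authors' code) in which the labeled trajectories are assumed to occupy the first `n_labeled` rows and columns of the affinity matrix:

```python
import numpy as np

def propagate_labels(W, Y_L, n_labeled):
    """Closed-form label propagation via Markov random walks.

    W         : (n, n) affinity matrix over all trajectories,
                labeled nodes first, unlabeled nodes last.
    Y_L       : (n_labeled, c) one-hot label matrix of labeled nodes.
    Returns   : integer labels of the unlabeled nodes.
    """
    # Row-normalize W into a transition matrix P = D^{-1} W.
    P = W / W.sum(axis=1, keepdims=True)
    # Blocks of P over the (labeled, unlabeled) partition.
    P_UL = P[n_labeled:, :n_labeled]
    P_UU = P[n_labeled:, n_labeled:]
    # Closed form: Y_U = (I - P_UU)^{-1} P_UL Y_L.
    n_u = P_UU.shape[0]
    Y_U = np.linalg.solve(np.eye(n_u) - P_UU, P_UL @ Y_L)
    # Each unlabeled node takes the label with the highest probability.
    return Y_U.argmax(axis=1)
```

Solving the linear system with `np.linalg.solve` avoids forming the explicit inverse, which is the usual numerical practice.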
It is worth noting that the label propagation algorithm can only assign trajectories to known labels. However, when new objects move into the scene, new labels should be introduced in time. To accomplish this, after the labels are predicted in each frame, the normalized-cut cost of each cluster is evaluated. A small cost indicates a great intra-cluster variation. If the cost is below a threshold (a fixed value in our work), the cluster is further split, and a new label is assigned to the sub-cluster whose appearance differs more from the previous one. When an object moves out of the scene, few trajectories carry the corresponding label, and the corresponding cluster is removed. In this way, the number of clusters changes adaptively according to how many moving objects appear in the scene. After the clustering process is done, all trajectories in each cluster are assigned to their corresponding layer.
5 Motion Model Estimation
Although trajectories of feature points provide motion information only at sparse pixels, they are more accurate and less noisy than optical flow. With these accurate motion vectors of trajectories as evidence, we can estimate the motion-field model for the whole frame, i.e., estimate the motion vector of every pixel in each layer. To accomplish the pixel-wise motion estimation, we construct a pairwise MRF on a grid structure, where each vertex represents a pixel and the set of edges represents the pairwise neighborhood relationships on this structure. Two sets of potentials are involved: edge potentials measuring the similarity between neighboring vertices, and self potentials measuring the likelihood of the evidence. If these potentials are defined as Gaussian distributions, the task is formulated as a Gaussian Belief Propagation (GaBP) problem [1]. Given the motion vectors $\{v_i\}$ of the trajectories in the $k$th layer, the conditional joint probability distribution of the motion model $M^k = \{m_i\}$ can be inferred as:

$$p\big(M^k \mid \{v_i\}\big) \propto \prod_{(i,j)\in E} \psi_{ij}(m_i, m_j) \prod_{i \in S^k} \phi_i(m_i) \qquad (5)$$

where $S^k$ denotes the set of feature points along the trajectories clustered into the $k$th layer. The edge and self potentials are defined as $\psi_{ij}(m_i, m_j) = \exp\!\big(-\tfrac{1}{2}\|m_i - m_j\|^2/\sigma_e^2\big)$ and $\phi_i(m_i) = \exp\!\big(-\tfrac{1}{2}\|m_i - v_i\|^2/\sigma_s^2\big)$, where $v_i$ represents the motion vector of the trajectory associated with the $i$th pixel. Our formulation encourages similarity between the motion vectors of neighboring points, and between the estimated motion vectors of the feature points and the evidence (i.e., the motion of the trajectories). According to GaBP, this distribution can be rewritten as

$$p\big(M^k \mid \{v_i\}\big) \propto \exp\!\big(-\tfrac{1}{2}\, m^\top J\, m + h^\top m\big) \qquad (6)$$

where the inverse covariance matrix $J$ encodes the connection between every pair of nodes, and the shift vector $h$ is defined by the motion of the trajectories. A closed-form solution for the marginal posterior of each node is the Gaussian $\mathcal{N}(\mu_i, \sigma_i^2)$, where $\mu_i$ is the $i$th entry of $J^{-1}h$ and $\sigma_i^2$ is the entry $(J^{-1})_{ii}$. The estimated motion field is demonstrated in Figure 2.
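The marginal means above come from solving the linear system $J\mu = h$, which is the fixed point that GaBP message passing converges to. The sketch below solves that system directly for a 1-D row of pixels instead of running message passing on the full 2-D grid; the precision weights `w_smooth` and `w_evid` are assumed values standing in for $1/\sigma_e^2$ and $1/\sigma_s^2$.

```python
import numpy as np

def gaussian_mrf_mean(n, evidence, w_smooth=1.0, w_evid=10.0):
    """Posterior mean of a 1-D Gaussian MRF motion field: solve J mu = h.

    evidence : dict {pixel index: observed motion value at a tracked point}.
    """
    J = np.zeros((n, n))
    h = np.zeros(n)
    # Pairwise smoothness potentials between neighboring pixels.
    for i in range(n - 1):
        J[i, i] += w_smooth
        J[i + 1, i + 1] += w_smooth
        J[i, i + 1] -= w_smooth
        J[i + 1, i] -= w_smooth
    # Self potentials anchoring pixels that carry trajectory evidence.
    for i, v in evidence.items():
        J[i, i] += w_evid
        h[i] += w_evid * v
    return np.linalg.solve(J, h)   # marginal means mu = J^{-1} h
```

Pixels without evidence receive a smooth interpolation of the tracked motion, which is exactly the behavior the grid MRF is designed to produce.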
6 Bayesian Filtering
The inference of the probability maps can be performed as sequential Bayesian filtering on a Hidden Markov Model. In this section, we first predict the appearance model and the prior probability for each layer given the motion models, followed by the inference of the posterior probability with the new observations in the current frame. For simplicity of expression, we drop the layer index $k$ in the rest of the paper; note that the processes in each layer are identical.

6.1 Model Propagation

Given the probability distribution of the estimated motion model, we can estimate the appearance model and the probability map in the current frame by propagating the corresponding model and map from the previous frame. Specifically, the motion vector describes exactly how a pixel shifts between consecutive frames. Therefore, armed with the motion information of a pixel in the current frame, we can easily obtain its appearance by propagating that of the corresponding pixel in the previous frame. However, the motion vector of each pixel has a Gaussian distribution over two-dimensional space, and it is impractical to marginalize the appearance over the whole Gaussian distribution. If we discard the uncertainty in the motion vector and set the variance to 0, the Gaussian distribution reduces to a Dirac delta function centered at the mean $\mu_i$. The marginalization over the whole appearance model then reduces to the evaluation at a single pixel, and the appearance model of the $i$th pixel can be readily obtained by

$$A_t(i) = A_{t-1}\big(f(i, \mu_i)\big) \qquad (7)$$

where the function $f(i, \mu_i)$ obtains the corresponding index in the previous frame by reversely shifting the $i$th pixel along the associated motion vector $\mu_i$. In practice, we found this approximation causes little performance degradation while saving much computational cost.

Similarly, the prior probability of the $i$th pixel in each layer is obtained by

$$\pi_t(i) = \pi_{t-1}\big(f(i, \mu_i)\big) \qquad (8)$$
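A minimal sketch of this Dirac-delta propagation, assuming integer-rounded motion vectors; pixels whose source falls outside the previous frame are left at zero here, a boundary-handling choice the paper does not specify.

```python
import numpy as np

def propagate_map(prev_map, flow):
    """Propagate a per-pixel map (appearance or prior) to the current
    frame by reverse-shifting each pixel along its motion vector.

    prev_map : (H, W) map from the previous frame.
    flow     : (H, W, 2) integer motion vectors (dy, dx) per pixel.
    """
    H, W = prev_map.shape
    out = np.zeros_like(prev_map)
    for y in range(H):
        for x in range(W):
            dy, dx = flow[y, x]
            # f(i): index of the corresponding pixel in the previous frame.
            py, px = y - dy, x - dx
            if 0 <= py < H and 0 <= px < W:
                out[y, x] = prev_map[py, px]
    return out
```

A vectorized gather would replace the double loop in a real implementation; the loop form keeps the correspondence to the per-pixel shift explicit.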
6.2 Probability Map Update
Once the appearance model and probability map are propagated from the previous frame, the posterior probability map of each layer can be inferred with the current frame $I_t$ as the new observation. With the assumption of independence between pixels, the posterior probability of the $i$th pixel is computed by Eq. (9):

$$P_t(i) = \frac{1}{Z}\, P\big(I_t(i) \mid A_t(i)\big)\, \pi_t(i) \qquad (9)$$

The likelihood of the observation at the $i$th pixel essentially describes how well the appearance fits the appearance model $A_t(i)$ of each layer. Kernel Density Estimation (KDE), as a non-parametric method, can effectively model the appearance of the background or foreground without requiring hyperparameter estimation. For its simplicity and effectiveness, we employ the KDE technique for appearance modeling. The appearance model of the $i$th pixel consists of a pool of color features accumulated from the previous frames, and the likelihood of the observation is estimated as:

$$P\big(I_t(i) \mid A_t(i)\big) = \frac{1}{N} \sum_{j=1}^{N} K_\sigma\big(I_t(i) - c^i_j\big) \qquad (10)$$

where $K_\sigma$ is the Gaussian kernel function, $c^i_j$ is the color feature at the $i$th pixel stored from the $j$th previous frame for KDE modeling, and $N$ is the number of stored previous frames. The posterior probability map produced in each layer is normalized by the partition function and then used as the input to the multi-label segmentation.
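The likelihood and posterior steps can be sketched as follows; the kernel bandwidth `sigma` and the per-layer sample pools are assumed inputs, and colors are treated as d-dimensional vectors.

```python
import numpy as np

def kde_likelihood(obs, samples, sigma=0.1):
    """KDE likelihood of an observed color feature under a pixel's
    appearance model, i.e. a pool of N color features stored from
    previous frames. `sigma` is an assumed kernel bandwidth."""
    obs, samples = np.asarray(obs, float), np.asarray(samples, float)
    d = samples.shape[-1]
    sq = np.sum((samples - obs) ** 2, axis=-1)
    # Average of Gaussian kernels centered on the stored samples.
    kernels = np.exp(-0.5 * sq / sigma**2)
    return kernels.mean() / (2 * np.pi * sigma**2) ** (d / 2)

def posterior_map(likelihoods, priors):
    """Per-layer posterior: likelihood x prior, normalized across
    layers by the partition function."""
    unnorm = np.asarray(likelihoods) * np.asarray(priors)
    return unnorm / unnorm.sum()
```

In the full system this runs independently at every pixel of every layer; here a single pixel with two layers illustrates the computation.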
7 Multilabel Segmentation
With the collection of normalized probability maps of all layers at hand, our final task for foreground object segmentation is to perform pixel-wise labeling of the whole frame. This segmentation problem can be converted into an energy minimization problem on a pairwise MRF, which can be solved efficiently via graph cuts [17]. Due to the ill-posed nature of the segmentation problem, regularization is required. For our problem, we design two regularizers: a smoothness cost and a label cost, preferring smooth labelings and fewer unique labels, respectively. The global energy function is formulated as:

$$E(L) = \sum_i D_i(l_i) + \lambda_1 \sum_{(i,j)\in\mathcal{N}} S_{ij}\,\delta(l_i \neq l_j) + \lambda_2 \sum_{l} h_l\,\delta_l(L) \qquad (11)$$

where $L = \{l_i\}$ denotes the labeling of all pixels and $\mathcal{N}$ the set of neighboring pixel pairs. The three terms on the right side of the equation are the data cost, the smoothness cost, and the label cost, respectively. The data cost $D_i(l_i)$ is defined as the negative log probability of pixel $i$ belonging to layer $l_i$. The smoothness cost $S_{ij}$ is defined by the similarity of the two neighboring pixels $i$ and $j$, and the indicator $\delta(l_i \neq l_j)$ equals $0$ if $i$ and $j$ have the same label, and $1$ otherwise. Moreover, in the label-cost term, $h_l$ is the non-negative cost of label $l$, and $\delta_l(L)$ is the corresponding indicator function, defined as $1$ if some pixel in $L$ is assigned label $l$ and $0$ otherwise (so the term effectively sums over the set of unique labels of $L$). $\lambda_1$ and $\lambda_2$ are non-negative parameters balancing the data cost against the two regularizers (both set to $1$ in our work). The energy minimization can be solved efficiently using the method presented in [6].
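For concreteness, the energy above can be evaluated as below. This sketch only scores a candidate labeling; the minimization itself would use the graph-cut machinery of [6], which is beyond a few lines.

```python
import numpy as np

def labeling_energy(data_cost, labels, edges, sim, label_costs,
                    lam1=1.0, lam2=1.0):
    """Evaluate E(L) = data + smoothness + label cost for a labeling.

    data_cost   : (n_pixels, n_labels) -log posterior per pixel/label.
    labels      : (n_pixels,) candidate labeling L.
    edges       : list of neighboring pixel pairs (i, j).
    sim         : dict {(i, j): similarity S_ij}.
    label_costs : (n_labels,) cost h_l charged once per label used.
    """
    data = data_cost[np.arange(len(labels)), labels].sum()
    # Smoothness: pay S_ij whenever two neighbors take different labels.
    smooth = sum(sim[(i, j)] for (i, j) in edges if labels[i] != labels[j])
    # Label cost: pay h_l once for every label present in L.
    label = sum(label_costs[l] for l in np.unique(labels))
    return data + lam1 * smooth + lam2 * label
```

On a toy two-pixel problem one can check that the label cost penalizes introducing a second label while the data cost may still favor it, which is exactly the trade-off the regularizers encode.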
It is worth noting that when a new foreground object appears in the scene or the initial frames are being processed, the feature samples used to build the appearance model are usually insufficient for Kernel Density Estimation. In such cases where KDE is invalid, the probability maps are no longer available as the data cost for the multi-label segmentation. An alternative way to define the data-cost term in the energy function is based on the labeled trajectories: $D_i(l) = c$ if $l$ equals the label of the trajectory associated with the $i$th pixel, and $0$ otherwise, where $c$ is a negative constant. This definition simply utilizes the motion information rather than the appearance model.
Finally, according to the labels of pixels, the appearance models are updated by adding the color feature in the current frame to the corresponding appearance model.
8 Experiments
We evaluate our algorithm qualitatively and quantitatively based on two criteria: one is conventional two-label background subtraction, and the other is multi-label background segmentation. The former is evaluated using the F-score, the common measurement. For the latter, since, to the best of our knowledge, no prior work addresses this setting, we carefully design a reasonable measurement. The results show that our method outperforms the state-of-the-art approaches in both settings.
Dataset: Experiments were run on two public datasets. A set of sequences (cars1–8, people1–2) in the Hopkins dataset [38] is commonly used for quantitative evaluation on this topic, some of which contain multiple foreground objects. To quantitatively evaluate the performance, we manually produced ground-truth masks for all frames, including the discrimination of individual foreground objects. The other is the Complex Background Dataset (CBD) [25], whose complex backgrounds and camera rotation introduce a great challenge.
Table 1: F-score (%) of two-label background subtraction on the Hopkins and CBD sequences.

         MLBS   GBSSP  FOF    OMCBS  GBS    BSFMC
cars1    92.04  87.14  50.84  91.77  80.30  73.13
cars2    90.16  82.17  56.60  69.13  68.45  55.68
cars3    93.16  72.94  73.57  41.27  79.22  60.91
cars4    91.55  88.24  47.96  73.65  66.63  54.81
cars5    86.62  81.66  70.94  60.44  74.56  51.97
cars6    92.23  81.44  84.34  90.31  73.34  37.56
cars7    91.17  90.86  42.92  89.87  69.10  40.35
cars8    85.93  86.85  87.61  83.84  80.29  62.42
people1  81.38  81.21  69.59  64.04  80.19  34.25
people2  94.34  84.74  88.40  89.32  81.40  64.92
drive    65.95  53.55  61.80  13.68   5.18   2.02
forest   72.20  91.44  31.44  42.99  23.19  16.76
parking  83.66  68.97  73.19  11.47  11.02  13.05
store    86.28  83.44  70.74  10.18  21.42   8.83
traffic  48.19  31.31  71.24  41.49  24.14  27.49
Overall  83.66  77.73  65.45  58.23  55.90  40.28
8.1 Twolabel Background Subtraction
The performance of our framework, denoted MLBS (Multi-Layer Background Subtraction), is compared with five state-of-the-art algorithms: GBSSP [22], FOF [25], OMCBS [9], GBS [20], and BSFMC [31]. GBS requires initialization labels for each frame as additional input, and GBSSP needs the ground truth of the first frame for initialization. For fair comparison, we provide GBS with the labeling results of BSFMC, and supply GBSSP with the labeling result of the first frame generated by our method. Note that, instead of requiring such additional information, our MLBS method completes self-initialization automatically. Moreover, we use the parameters provided by the authors of the external methods and keep them fixed for each algorithm in all experiments.
The quantitative comparisons on the two datasets are shown in Table 1. The proposed method outperforms the other methods in the literature on most test sequences. Especially in cases where objects are occluded and then separated, such as cars3 and people2, the multi-layer strategy boosts the performance with a large jump in F-score. This is because our method can accurately estimate separate motion models for the foreground objects, instead of only one ambiguous model for the whole foreground, while separate appearance models comprehensively provide the evidence for probabilistic inference. The overall F-score of our method is higher than the best score of the state of the art by a noticeable margin (83.66% vs. 77.73%).
To evaluate the ability of the proposed method to handle multiple objects, we categorize the sequences into three scenarios according to the number of moving objects in the scene, and report the average F-score for each scenario in Table 2. In the single-object scenario, our method outperforms the second-best method by a small margin (84.05% vs. 80.70%). But as the number of objects increases, the margin grows (3.4% vs. 8.0% vs. 8.7%). This clearly demonstrates the capability of our method in complex scenes.
Table 2: Average F-score (%) grouped by the number of moving objects in the scene.

            MLBS   GBSSP  FOF    OMCBS  GBS    BSFMC
1 object    84.05  80.70  59.20  54.22  47.82  31.20
2 objects   91.14  81.51  83.19  71.48  80.30  62.75
3+ objects  74.99  65.05  66.26  57.02  55.72  45.05
Qualitative comparisons with three algorithms are illustrated in Figure 4. We pick two representative sequences (cars5 and people2) from the Hopkins dataset, where at least two foreground objects appear in the scene, and one sequence (forest) from CBD. It is notable that our MLBS algorithm separates moving objects and background accurately even when the background scene is complex and the objects do not simply move in the same direction. For instance, in the people2 sequence, two women who walk in different directions, occlude each other, and then separate are segmented precisely, while other methods wrongly label the newly appearing background region (e.g., the black rubbish bin) as foreground.
8.2 Multilabel Background Subtraction
We also evaluate the capability to separate different foreground objects. Since, to the best of our knowledge, no prior work addresses this setting, we have designed a baseline for comparison; details of the baseline are given in the Supplementary Material. Besides, [26] proposed an offline approach (SLT) that tackles the video segmentation problem based on long-term video analysis and achieves cutting-edge performance. Since this method does not discriminate foreground from background, we adapt it by manually assigning the background label to the best-fitting segmented region, and compare against it.
For quantitative comparison, we design a measurement by modifying the metric used in [26]. Let $\mathcal{G}$ denote a foreground region in the ground truth, $\mathcal{R}$ the corresponding region in the mask generated by an algorithm, and $|\cdot|$ the number of pixels inside a region. For each foreground region in the ground truth, precision, recall, and F-score are defined as:

$$\mathrm{Prec} = \frac{|\mathcal{R} \cap \mathcal{G}|}{|\mathcal{R}|}, \qquad \mathrm{Rec} = \frac{|\mathcal{R} \cap \mathcal{G}|}{|\mathcal{G}|}, \qquad \mathrm{F} = \frac{2\cdot\mathrm{Prec}\cdot\mathrm{Rec}}{\mathrm{Prec}+\mathrm{Rec}} \qquad (12)$$

The overall metric is obtained by averaging the measures of the single regions, and the best one-to-one assignment of generated regions to ground-truth regions is found by the Hungarian method [18]. In the case where generated regions are left without an assigned ground-truth region, we set the precision and recall of such regions to 1 and 0, respectively. It is worth mentioning that our revised metric is calculated over foreground regions only, to keep consistency with binary background subtraction: when there is only one foreground object in the scene, its value coincides with that of the binary measure.
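The evaluation protocol can be sketched as follows. For the tiny examples tested here, a brute-force search over permutations stands in for the Hungarian method [18] (both return the optimal one-to-one assignment); regions are represented as sets of pixel indices, which is an assumption for illustration.

```python
from itertools import permutations

def region_fscore(gt_region, pred_region):
    """Precision, recall, F-score of one predicted region vs. one
    ground-truth region; regions are sets of pixel indices."""
    inter = len(gt_region & pred_region)
    p = inter / len(pred_region) if pred_region else 1.0
    r = inter / len(gt_region) if gt_region else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f

def multilabel_fscore(gt_regions, pred_regions):
    """Average per-region F-score under the best one-to-one assignment
    of predicted to ground-truth foreground regions. Unmatched regions
    contribute F = 0 (precision 1, recall 0 for spurious predictions)."""
    n = max(len(gt_regions), len(pred_regions), 1)
    best = 0.0
    for perm in permutations(range(len(pred_regions))):
        total = sum(region_fscore(g, pred_regions[pi])[2]
                    for g, pi in zip(gt_regions, perm))
        best = max(best, total / n)
    return best
```

For realistic numbers of regions one would swap the permutation loop for `scipy.optimize.linear_sum_assignment`, which computes the same optimal matching in polynomial time.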
Table 3: Multi-label background subtraction performance (%).

                           Precision  Recall  F-score
First 10 Frames  Ours          89.79   83.00    85.06
                 SLT           90.70   78.94    83.41
                 Baseline      61.23   62.32    61.35
All Frames       Ours          88.60   85.05    86.16
                 SLT           89.51   78.24    82.00
                 Baseline      63.63   66.22    64.35
The performance evaluation is shown in Table 3. We compare the approaches evaluated on both all frames and the first ten frames. The proposed method outperforms SLT on both sets and improves greatly over the baseline. Notably, the F-score of our method on all frames is higher than that on the first ten frames, a consequence of the online setting: unlike offline methods, which hold the knowledge of the whole sequence from the start, our online approach has little prior knowledge and requires a self-initialization step during the first several frames, which leads to lower performance due to insufficient prior knowledge. This effect diminishes as the sequence becomes longer.
Qualitative evaluations of our method and SLT are demonstrated in Figure 5. Our proposed method separates the objects more accurately, while SLT may falsely recognize two objects as one when they are very close and moving in a similar direction (see Block). Furthermore, our method can detect new objects immediately when they enter the scene (see Block). With its accurate and robust foreground object detection and segmentation, our method produces the proper number of foreground object regions, which is reflected by the higher recall in Table 3. Additionally, equipped with an appearance model for each layer, our MLBS method can handle articulated motion (see Block) to a certain extent.
Table 4: Average computational time per frame (in milliseconds).

tclb  mtest  mdprop  graphcut  upmdl  Total
1109  3675   144     1369      559    6856
Table 4 shows the average computational time per frame. Our unoptimized Matlab implementation takes around 7 seconds per frame on an Intel Xeon E5 CPU with 16GB of memory. The computational time is dominated by motion estimation and graph cut. Motion estimation mainly involves matrix inversion and multiplication; since such operations can be readily parallelized on the GPU, we believe real-time performance can be achieved through GPU optimization and a faster multi-threaded implementation.
9 Conclusion
We propose a novel online multi-layer-based framework for background subtraction with a moving camera. In our framework, every foreground object and the background are assigned to an independent processing layer. A processing block is carefully designed to perform posterior inference in a Bayesian filtering framework, and multi-label graph cut is employed to produce the pixel-wise segmentation of every video frame based on the normalized probability maps. Experiments show that our method performs favorably against other state-of-the-art methods, with an outstanding ability to segment multiple foreground objects.
References
 [1] D. Bickson. Gaussian belief propagation: Theory and application. CoRR, abs/0811.2518, 2008.
 [2] P. Bideau and E. Learned-Miller. It’s moving! A probabilistic model for causal motion segmentation in moving camera videos. arXiv preprint arXiv:1604.00136, 2016.
 [3] A. B. Chan and N. Vasconcelos. Modeling, clustering, and segmenting video with mixtures of dynamic textures. IEEE transactions on pattern analysis and machine intelligence, 30(5):909–926, 2008.
 [4] A. B. Chan and N. Vasconcelos. Layered dynamic textures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(10):1862–1879, 2009.
 [5] X. Cui, J. Huang, S. Zhang, and D. N. Metaxas. Background subtraction using low rank and group sparsity constraints. In Proceedings of the 12th European Conference on Computer Vision  Volume Part I, pages 612–625, 2012.
 [6] A. Delong, A. Osokin, H. N. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. International journal of computer vision, 96(1):1–27, 2012.
 [7] A. Elgammal. Background Subtraction: Theory and Practice. Morgan & Claypool Publishers, 2014.
 [8] A. Elgammal, R. Duraiswami, D. Harwood, and L. S. Davis. Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of the IEEE, 90(7):1151–1163, Jul 2002.
 [9] A. Elqursh and A. Elgammal. Online moving camera background subtraction. In European Conference on Computer Vision, pages 228–241. Springer, 2012.
 [10] A. Elqursh and A. Elgammal. Online motion segmentation using dynamic label propagation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2008–2015, 2013.
 [11] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395, June 1981.
 [12] B. Han and L. S. Davis. Density-based multi-feature background subtraction with support vector machine. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5):1017–1023, May 2012.
 [13] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
 [14] E. Hayman and J.-O. Eklundh. Statistical background subtraction for a mobile observer. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pages 67–74. IEEE, 2003.
 [15] Y. Jin, L. Tao, H. Di, N. I. Rao, and G. Xu. Background modeling from a free-moving camera by multi-layer homography algorithm. In 2008 15th IEEE International Conference on Image Processing, pages 1572–1575. IEEE, 2008.
 [16] K. Kim, D. Harwood, and L. S. Davis. Background updating for visual surveillance. In International Symposium on Visual Computing, pages 337–346. Springer, 2005.
 [17] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):147–159, Feb 2004.
 [18] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1–2):83–97, 1955.
 [19] M. P. Kumar, P. H. Torr, and A. Zisserman. Learning layered motion segmentations of video. International Journal of Computer Vision, 76(3):301–319, 2008.
 [20] S. Kwak, T. Lim, W. Nam, B. Han, and J. H. Han. Generalized background subtraction based on hybrid inference by belief propagation and Bayesian filtering. In 2011 International Conference on Computer Vision, pages 2174–2181, 2011.
 [21] D.-S. Lee. Effective Gaussian mixture learning for video background subtraction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5):827–832, May 2005.
 [22] J. Lim and B. Han. Generalized background subtraction using superpixels with label integrated motion estimation. In European Conference on Computer Vision, pages 173–187. Springer, 2014.
 [23] A. Mittal and D. Huttenlocher. Scene modeling for wide area surveillance and image synthesis. In Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, volume 2, pages 160–167, 2000.
 [24] A. Mumtaz, W. Zhang, and A. B. Chan. Joint motion segmentation and background estimation in dynamic scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 368–375, 2014.
 [25] M. Narayana, A. Hanson, and E. Learned-Miller. Coherent motion segmentation in moving camera videos using optical flow orientations. In Proceedings of the IEEE International Conference on Computer Vision, pages 1577–1584, 2013.
 [26] P. Ochs, J. Malik, and T. Brox. Segmentation of moving objects by long term video analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(6):1187–1200, June 2014.
 [27] K. Patwardhan, G. Sapiro, and V. Morellas. Robust foreground detection in video using pixel layers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(4):746–751, 2008.
 [28] X. Ren, T. X. Han, and Z. He. Ensemble video object cut in highly dynamic scenes. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 1947–1954, June 2013.
 [29] Y. Ren, C.-S. Chua, and Y.-K. Ho. Statistical background modeling for non-stationary camera. Pattern Recognition Letters, 24(1):183–196, 2003.
 [30] S. Rowe and A. Blake. Statistical mosaics for tracking. Image and Vision Computing, 14(8):549–564, 1996.
 [31] Y. Sheikh, O. Javed, and T. Kanade. Background subtraction for freely moving cameras. In 2009 IEEE 12th International Conference on Computer Vision, pages 1219–1225, Sept 2009.
 [32] Y. Sheikh and M. Shah. Bayesian modeling of dynamic scenes for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(11):1778–1792, Nov 2005.
 [33] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
 [34] C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking. In Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on., volume 2. IEEE, 1999.
 [35] N. Sundaram, T. Brox, and K. Keutzer. Dense point trajectories by GPU-accelerated large displacement optical flow. In European Conference on Computer Vision, pages 438–451. Springer, 2010.
 [36] P. Torr and A. Zisserman. MLESAC: A new robust estimator with application to estimating image geometry. Comput. Vis. Image Underst., 78(1):138–156, Apr. 2000.
 [37] P. H. Torr, R. Szeliski, and P. Anandan. An integrated bayesian approach to layer extraction from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3):297–303, 2001.
 [38] R. Tron and R. Vidal. A benchmark for the comparison of 3d motion segmentation algorithms. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2007.
 [39] J. Y. Wang and E. H. Adelson. Representing moving images with layers. IEEE Transactions on Image Processing, 3(5):625–638, 1994.
 [40] J. Yan and M. Pollefeys. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In European Conference on Computer Vision, pages 94–106. Springer, 2006.
 [41] D. Zamalieva, A. Yilmaz, and J. W. Davis. A multi-transformational model for background subtraction with moving cameras. In European Conference on Computer Vision, pages 803–817. Springer, 2014.
 [42] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.