Compressed Dynamic Mode Decomposition for Background Modeling

12/14/2015
by N. Benjamin Erichson, et al.
University of St Andrews

We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling. The dynamic mode decomposition (DMD) is a regression technique that integrates two of the leading data analysis methods in use today: Fourier transforms and singular value decomposition. Borrowing ideas from compressed sensing and matrix sketching, cDMD eases the computational workload of high-resolution video processing. The key principle of cDMD is to obtain the decomposition on a (small) compressed matrix representation of the video feed. Hence, the cDMD algorithm scales with the intrinsic rank of the matrix, rather than with the size of the actual video (data) matrix. Selection of the optimal modes characterizing the background is formulated as a sparsity-constrained sparse coding problem. Our results show that the quality of the resulting background model is competitive, quantified by the F-measure, recall and precision. A GPU (graphics processing unit) accelerated implementation is also presented, which further boosts the computational efficiency of the algorithm.


1 Introduction

One of the fundamental computer vision objectives is to detect moving objects in a given video stream. At the most basic level, moving objects can be found in a video by removing the background. However, this is a challenging task in practice, since the true background is often unknown. Algorithms for background modeling are required to be both robust and adaptive. Indeed, the list of challenges is significant and includes camera jitter, illumination changes, shadows and dynamic backgrounds. There is no single method currently available that is capable of handling all the challenges in real-time without suffering performance failures. Moreover, one of the great challenges in this field is to efficiently process high-resolution video streams, a task that is at the edge of performance limits for state-of-the-art algorithms. Given the importance of background modeling, a variety of mathematical methods and algorithms have been developed over the past decade. Comprehensive overviews of traditional and state-of-the-art methods are provided by Bouwmans bouwmans2014traditional or Sobral and Vacavant Sobralreview .

Motivation.

This work advocates the method of dynamic mode decomposition (DMD), which enables the decomposition of spatio-temporal grid data in both space and time. The DMD has been successfully applied to videos grosek2014 ; erichson2015 ; mrDMDbg ; however, the computational costs are dominated by the singular value decomposition (SVD). Even with the aid of recent innovations around randomized algorithms for computing the SVD halko2011rand , the computational costs remain expensive for high-resolution videos. Importantly, we build on the recently introduced compressed dynamic mode decomposition (cDMD) algorithm, which integrates DMD with ideas from compressed sensing and matrix sketching cdmd . Hence, instead of computing the DMD on the full-resolution video data, we show that an accurate decomposition can be obtained from a compressed representation of the video in a fraction of the time. The optimal mode selection for background modeling is formulated as a sparsity-constrained sparse coding problem, which can be efficiently approximated using the greedy orthogonal matching pursuit method. The performance gains in computation time are significant, even competitive with Gaussian mixture models. Moreover, the performance evaluation on real videos shows that the detection accuracy is competitive compared to leading robust principal component analysis (RPCA) algorithms.

Organization.

The rest of this paper is organized as follows. Section 2 presents a brief introduction to the dynamic mode decomposition and its application to video and background modeling. Section 3 presents the compressed DMD algorithm and different measurement matrices to construct the compressed video matrix. A GPU-accelerated implementation is also outlined. Finally, a detailed evaluation of the algorithm is presented in Section 4. Concluding remarks and further research directions are given in Section 5. Appendix A gives an overview of the notation.

2 DMD for Video Processing

2.1 The Dynamic Mode Decomposition

The dynamic mode decomposition is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in nonlinear dynamical systems, or short-time future estimates of such systems. DMD was originally introduced in the fluid mechanics community by Schmid DMD1 and Rowley et al. DMD4 . A surveillance video sequence offers an appropriate application for DMD because the frames of the video are, by nature, equally spaced in time, and the pixel data, collected in every snapshot, can readily be vectorized. The dynamic mode decomposition is illustrated for videos in Figure 1. For computational convenience the flattened grayscale video frames (snapshots) of a given video stream are stored, ordered in time, as column vectors of a matrix X. Hence, we obtain a 2-dimensional spatio-temporal grid X of dimension n × m, where n denotes the number of pixels per frame, m is the number of video frames taken, and the matrix elements x_jt correspond to a pixel intensity in space and time. The video frames can be thought of as snapshots of some underlying dynamics. Each video frame (snapshot) x_{t+1} at time t+1 is assumed to be connected to the previous frame x_t by a linear map A.

Figure 1: Illustration of the dynamic mode decomposition for video applications. Given a video stream, the first step involves reshaping the grayscale video frames into a 2-dimensional spatio-temporal grid. The DMD then creates a decomposition in space and time in which DMD modes contain spatial structure.

Mathematically, the linear map A is a time-independent operator which constructs the approximate linear evolution

    x_{t+1} = A x_t.    (1)

The objective of dynamic mode decomposition is to find an estimate for the matrix A and its eigenvalue decomposition that characterize the system dynamics. At its core, dynamic mode decomposition is a regression algorithm. First, the spatio-temporal grid is separated into two overlapping sets of data, called the left and right snapshot sequences

    X = [x_1, x_2, ..., x_{m-1}],    X' = [x_2, x_3, ..., x_m].    (2)

Equation (1) is reformulated in matrix notation as

    X' ≈ A X.    (3)

In order to find an estimate for the matrix A we face the following least-squares problem

    minimize_A  ||X' − A X||_F,    (4)

where ||·||_F denotes the Frobenius norm. This is a well-studied problem, and an estimate of the linear operator A is given by

    A = X' X^†,    (5)

where X^† denotes the Moore-Penrose pseudoinverse, which produces a regression that is optimal in a least-squares sense. The DMD modes φ_j, containing the spatial information, are then obtained as eigenvectors of the matrix A,

    A W = W Λ,    (6)

where the columns of W are the eigenvectors φ_j and Λ is a diagonal matrix containing the corresponding eigenvalues λ_j. In practice, when the dimension n is large, the matrix A may be intractable to estimate and to analyze directly. DMD circumvents the computation of A by considering a rank-reduced representation Ã. This is achieved by using a similarity transform, i.e., projecting A on the left singular vectors of X. Moreover, the DMD typically makes use of low-rank structure so that the total number of modes, k ≪ min(n, m), allows for dimensionality reduction of the video stream. Hence, only the relatively small matrix Ã needs to be estimated and analyzed (see Section 3 for more details). The dynamic mode decomposition then yields the following low-rank factorization of a given spatio-temporal grid (video stream):

    X ≈ Φ B V,    (7)

where Φ = [φ_1, ..., φ_k] contains the DMD modes, the diagonal matrix B = diag(b_1, ..., b_k) has the amplitudes b_j as entries, and V is the k × m Vandermonde matrix with entries V_{jt} = λ_j^{t−1}, describing the temporal evolution of the DMD modes.
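For concreteness, the following is a minimal NumPy sketch of the factorization in Eqs. (2)-(7); the function name, the rank-k truncation and the least-squares amplitude fit are illustrative choices on our part, not a reproduction of the authors' implementation.

```python
import numpy as np

def dmd(X, k):
    """Rank-k dynamic mode decomposition of a snapshot matrix X of shape (n, m)."""
    XL, XR = X[:, :-1], X[:, 1:]                      # left/right snapshot sequences, Eq. (2)
    U, s, Vh = np.linalg.svd(XL, full_matrices=False)
    U, s, Vh = U[:, :k], s[:k], Vh[:k, :]             # truncate to target rank k
    Atilde = U.conj().T @ XR @ Vh.conj().T / s        # rank-reduced linear map
    lam, W = np.linalg.eig(Atilde)                    # eigenvalues and eigenvectors, Eq. (6)
    Phi = XR @ Vh.conj().T / s @ W                    # (exact) DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # amplitudes from the first frame
    return Phi, lam, b

# Temporal evolution via the Vandermonde matrix, Eq. (7):
#   V = np.vander(lam, N=X.shape[1], increasing=True);  X ≈ Phi @ np.diag(b) @ V
```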

2.2 DMD for Foreground/Background Separation

The DMD method can attempt to reconstruct any given frame, or even possibly future frames. The validity of the reconstruction thereby depends on how well the specific video sequence meets the assumptions and criteria of the DMD method. Specifically, a video frame x_t at time point t is approximately reconstructed as follows:

    x_t ≈ Σ_{j=1}^{k} b_j φ_j λ_j^{t−1}.    (8)

Notice that the DMD mode φ_j is a vector containing the spatial structure of the decomposition, while the eigenvalue λ_j^{t−1} describes the temporal evolution. The scalar b_j is the amplitude of the corresponding DMD mode. At time t = 1, equation (8) reduces to x_1 ≈ Σ_j b_j φ_j. Since the amplitudes are time-independent, they can be obtained by solving the following least-squares problem using the first video frame x_1 as initial condition:

    minimize_b  ||x_1 − Φ b||_2^2.    (9)

It becomes apparent that any portion of the first video frame that does not change in time, or changes very slowly in time, must have an associated continuous-time eigenvalue

    ω_j = log(λ_j) / Δt    (10)

that is located near the origin in complex space: ω_j ≈ 0, or equivalently |λ_j| ≈ 1.
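A small sketch of Eq. (10): given the discrete-time DMD eigenvalues and the frame spacing, modes whose continuous-time eigenvalue lies near the origin are candidates for the (static) background. The tolerance value below is a hypothetical choice for illustration.

```python
import numpy as np

def background_mode_indices(lam, dt=1.0, tol=1e-2):
    """Return indices j with |omega_j| near zero, i.e. candidate background modes."""
    omega = np.log(lam.astype(complex)) / dt      # continuous-time eigenvalues, Eq. (10)
    return np.flatnonzero(np.abs(omega) < tol)    # |omega| ~ 0  <=>  |lambda| ~ 1
```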

Figure 2: Results of the dynamic mode decomposition for the ChangeDetection.net video sequence ‘canoe’. Subplot (a) shows three sample frames of the video sequence. Subplots (b) and (c) show the dominant continuous-time eigenvalues and the temporal evolution of the amplitudes. The modes corresponding to the amplitudes with the highest variance capture the dominant foreground object (canoe), while the zero mode captures the dominant structure of the background. Modes corresponding to high-frequency amplitudes capture other dynamics in the video sequence, e.g., waves.

This fact becomes the key principle used to separate foreground elements (approximately sparse) from background (approximately low-rank) information. Figure 2 shows the dominant continuous-time eigenvalues for a video sequence. Subplot (a) shows three sample frames from this video sequence, which includes a canoe. Here the foreground object (canoe) is not present at the beginning and the end of the video sequence. The dynamic mode decomposition factorizes this sequence into modes describing the different dynamics present. The analysis of the continuous-time eigenvalues and the amplitudes over time (the amplitudes multiplied by the Vandermonde matrix) can provide interesting insights, shown in subplots (b) and (c). First, the amplitude of the prominent zero mode (background) is constant over time, indicating that this mode is capturing the dominant (static) content of the video sequence, i.e., the background. The next pair of modes corresponds to the canoe, a foreground object slowly moving over time. The amplitude reveals the presence of this object. Specifically, the amplitude reaches its maximum at about frame index 150, when the canoe is in the center of the video frame. At the beginning and end of the video the canoe is not present, indicated by the negative values of the amplitude. The subsequent modes describe other dynamics in the video sequence, e.g., the movements of the canoeist and the waves. For instance, the modes describing the waves have high frequency and small amplitudes (not shown here). Hence, a theoretical viewpoint we will build upon with the DMD methodology centers around the recent idea of low-rank and sparse matrix decompositions. Following this approach, background modeling can be formulated as a matrix separation problem into low-rank (background) and sparse (foreground) components. This viewpoint has been advocated, for instance, by Candès et al. RPCA1 in the framework of robust principal component analysis (RPCA). For a thorough discussion of such methods used for background modeling, we refer to Bouwmans et al. bouwmans2014robust ; bouwmans2015decomp . The connection between DMD and RPCA was first established by Grosek and Kutz grosek2014 . Assume the set B of background modes satisfies |ω_j| ≈ 0 for all j ∈ B. The DMD expansion of equation (8) then yields

    X ≈ Σ_{j ∈ B} b_j φ_j λ_j^{t−1} + Σ_{j ∉ B} b_j φ_j λ_j^{t−1} = L + S,    (11)

where t = [1, ..., m] is a time vector. (Note that by construction L is complex, while pixel intensities of the original video stream are real-valued; hence, only the real part is considered in the following.) Specifically, DMD provides a matrix decomposition of the form X ≈ L + S, where the low-rank matrix L will render the video of just the background, and the sparse matrix S will render the complementary video of the moving foreground objects. We can interpret these DMD results as follows: stationary background objects translate into highly correlated pixel regions from one frame to the next, which suggests a low-rank structure within the video data. Thus the DMD algorithm can be thought of as an RPCA method. The advantage of the DMD method and its sparse/low-rank separation is the computational efficiency of achieving (11), especially when compared to the optimization methods of RPCA. The analysis of the time-evolving amplitudes provides interesting opportunities. Specifically, learning the amplitudes’ profiles for different foreground objects allows automatic separation of video feeds into different components. For instance, it could be of interest to discriminate between cars and pedestrians in a given video sequence.
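As a toy illustration of the separation in Eq. (11), the background (low-rank) part can be reconstructed from the modes with near-zero continuous-time eigenvalues, and the foreground (sparse) part taken as the residual. The index set `bg` could come from the `background_mode_indices` sketch above; all names here are illustrative.

```python
import numpy as np

def lowrank_sparse_split(X, Phi, lam, b, bg):
    """Split the video matrix X into a background estimate L and foreground residual S."""
    V = np.vander(lam, N=X.shape[1], increasing=True)    # Vandermonde, temporal evolution
    L = (Phi[:, bg] @ np.diag(b[bg]) @ V[bg, :]).real    # low-rank background video
    S = X - L                                            # sparse foreground residual
    return L, S
```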

2.3 DMD for Real-Time Background Modeling

When dealing with high-resolution videos, the standard DMD approach is expensive in terms of computational time and memory, because the whole video sequence is reconstructed. Instead, a ‘good’ static background model is often sufficient for background subtraction. This is because background dynamics can be filtered out or thresholded. The challenge remains to automatically select the modes that best describe the background. This is essentially a bias-variance trade-off. Using just the zero mode (background) leads to an under-fit background model, while a large set of modes tends to overfit. Motivated by the sparsity-promoting variant of the standard DMD algorithm introduced by Jovanović et al. jovanovic2014sparsity , we formulate a sparsity-constrained sparse coding problem for mode selection. The idea is to augment equation (9) by an additional constraint on the number of non-zero elements in the vector b:

    minimize_{b̂}  ||x_1 − Φ b̂||_2^2   subject to   ||b̂||_0 ≤ K,    (12)

where b̂ is the sparse representation of b, and ||·||_0 is the ℓ0 pseudo-norm which counts the non-zero elements in b̂. Solving this sparsity problem exactly is NP-hard. However, the problem in Eq. (12) can be efficiently approximated using greedy methods. Specifically, we utilize orthogonal matching pursuit (OMP) mallat1993matching ; tropp2007signal . A highly computationally efficient algorithm is proposed by Rubinstein et al. rubinstein2008efficient , as implemented in the scikit-learn software package scikit-learn . The greedy OMP algorithm works iteratively, selecting at each step the mode with the highest correlation to the current residual. Once a mode is selected, the initial condition x_1 is orthogonally projected onto the span of the previously selected modes. Then the residual is recomputed and the process is repeated until K non-zero entries are obtained. If no priors are available, the optimal number of modes K can be determined using cross-validation. Finally, the background model is computed as

    x_BG = Φ b̂.    (13)
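For illustration, the following sketch solves Eq. (12) with scikit-learn's orthogonal matching pursuit, as referenced above. Since the DMD modes are complex while scikit-learn's OMP expects real inputs, the real and imaginary parts are stacked; this stacking and the default of 10 non-zero entries are our assumptions, not necessarily the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def select_background_modes(Phi, x1, n_nonzero=10):
    """Sparsity-constrained amplitudes (Eq. 12) and static background model (Eq. 13)."""
    A = np.vstack([Phi.real, Phi.imag])            # stacked real-valued dictionary
    y = np.concatenate([x1.real, x1.imag])         # first frame as initial condition
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(A, y)
    b_sparse = omp.coef_                           # sparsity-constrained amplitudes
    background = (Phi @ b_sparse).real             # static background frame
    return b_sparse, background
```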

3 Compressed DMD (cDMD)

Compressed DMD provides a computationally efficient framework to compute the dynamic mode decomposition on massively under-sampled or compressed data cdmd . The method was originally devised to reconstruct high-dimensional, full-resolution DMD modes from sparse, spatially under-resolved measurements by leveraging compressed sensing. However, it was quickly realized that if full-state measurements are available, many of the computationally expensive steps in DMD may be computed on a compressed representation of the data, providing dramatic computational savings. The first approach, where DMD is computed on sparse measurements without access to full data, is referred to as compressed sensing DMD. The second approach, where DMD is accelerated using a combination of calculations on compressed data and full data, is referred to as compressed DMD (cDMD); this is depicted schematically in Fig. 3. For the applications explored in this work, we use compressed DMD, since full image data is available and reducing algorithm run-time is critical for real-time performance.

Figure 3: Schematic of the compressed dynamic mode decomposition architecture. The data (video stream) is first compressed via left multiplication by a measurement matrix C. DMD is then performed on the compressed representation of the data. Finally, the full DMD modes are reconstructed from the compressed modes by the expression in Eq. (24).

3.1 Compressed Sensing and Matrix Sketching

Compression algorithms are at the core of modern video, image and audio processing software such as MPEG, JPEG and MP3. In our mathematical infrastructure of compressed DMD, we consider the theory of compressed sensing and matrix sketching.

Compressed sensing demonstrates that instead of measuring the high-dimensional signal, or pixel-space representation of a single frame x, we can instead measure a low-dimensional subsample y and approximate/reconstruct the full state space with this significantly smaller measurement Donoho:2006 ; Candes2008 ; Baraniuk:2007 . Specifically, compressed sensing assumes the data being measured is compressible in some basis, which is certainly the case for video. Thus the video can be represented in a small number of elements of that basis, i.e., we only need to solve for the few non-zero coefficients in the transform basis. For instance, consider the measurements y ∈ R^p, with p ≪ n:

    y = C x.    (14)

If x is sparse in some basis Ψ, then we may solve the underdetermined system of equations

    y = C Ψ s    (15)

for s and then reconstruct x = Ψ s. Since there are infinitely many solutions to this system of equations, we seek the sparsest solution s. However, it is well known from the compressed sensing literature that solving for the sparsest solution formally involves an ℓ0 optimization that is NP-hard. The success of compressed sensing is that it ultimately engineered a solution around this issue by showing that one can instead, under certain conditions on the measurement matrix C, trade the infeasible ℓ0 optimization for a convex ℓ1-minimization Donoho:2006 :

    minimize_s  ||s||_1   subject to   y = C Ψ s.    (16)

Thus the ℓ1-norm acts as a proxy for sparsity-promoting solutions of s. To guarantee that the compressed sensing architecture will almost certainly work in a probabilistic sense, the measurement matrix C and sparse basis Ψ must be incoherent, meaning that the rows of C are uncorrelated with the columns of Ψ. This is discussed in more detail in cdmd . Given that we are considering video frames, it is easy to suggest the use of generic basis functions such as Fourier or wavelets in order to represent the sparse signal s. Indeed, wavelets are already the standard for image compression architectures such as JPEG-2000. As for the Fourier transform basis, it is particularly attractive for many engineering purposes, since single-pixel measurements are clearly incoherent with it, given that a single pixel excites broadband frequency content.

Matrix sketching is another prominent framework for obtaining a similar compressed representation of a massive data matrix liberty2013simple ; SketchingNLA . The advantages of this approach are the less restrictive assumptions and the straightforward generalization from vectors to matrices. Hence, Eq. (14) can be reformulated in matrix notation as

    Y = C X,    (17)

where again C denotes a suitable measurement matrix. Matrix sketching comes with interesting error bounds and is applicable whenever the data matrix has low-rank structure. For instance, it has been successfully demonstrated that the singular values and right singular vectors can be approximated from such a compressed matrix representation Gilbert2012 .
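The following toy example (our own, with arbitrary sizes) illustrates the sketching idea of Eq. (17): the leading singular values of a tall low-rank matrix are approximately preserved after compression with a Gaussian measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, p = 10_000, 200, 10, 500                  # pixels, frames, intrinsic rank, measurements
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # synthetic low-rank "video"
C = rng.standard_normal((p, n)) / np.sqrt(p)       # Gaussian measurement matrix
Y = C @ X                                          # compressed representation, Eq. (17)

# Leading singular values of the sketch approximate those of the full matrix.
print(np.linalg.svd(X, compute_uv=False)[:r])
print(np.linalg.svd(Y, compute_uv=False)[:r])
```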

3.2 Algorithm

The compressed DMD algorithm proceeds similarly to the standard DMD algorithm tu2013dynamic at nearly every step until the computation of the DMD modes. The key difference is that we first compute a compressed representation of the video sequence, as illustrated in Figure 4.

Figure 4: Video compression using a sparse measurement matrix. The compressed matrix faithfully captures the essential spectral information of the video.

Hence, the algorithm starts by generating the measurement matrix C in order to compress or sketch the left and right snapshot sequences of Eq. (2):

    Y = C X,    Y' = C X',    (18)

where p ≪ n denotes the number of samples or measurements. There is a fundamental assumption that the input data are low-rank. This is satisfied for video data, because each of the columns of X and X' is sparse in some transform basis Ψ. Thus, for sufficiently many incoherent measurements, the compressed matrices Y and Y' have similar correlation structures to their high-dimensional counterparts. Then, compressed DMD approximates the eigenvalues and eigenvectors of the linear map A_Y, where the estimator is defined as

    Y' = A_Y Y,    (19a)
    A_Y = Y' Y^†.    (19b)

The pseudo-inverse Y^† is computed using the SVD

    Y = U_Y Σ_Y V_Y^*,    (20)

where ∗ denotes the conjugate transpose. The matrices U_Y and V_Y are the truncated left and right singular vectors, and the diagonal matrix Σ_Y has the corresponding singular values as entries. Here k is the target rank of the truncated SVD approximation to Y. Note that the subscript Y is included to explicitly denote computations involving the compressed data Y. As in the standard DMD algorithm, we typically do not compute the large matrix A_Y, but instead compute the low-dimensional model projected onto the left singular vectors:

    Ã_Y = U_Y^* A_Y U_Y    (21a)
        = U_Y^* Y' V_Y Σ_Y^{-1}.    (21b)

Since this is a similarity transform, the eigenvectors and eigenvalues can be obtained from the eigendecomposition of Ã_Y,

    Ã_Y W_Y = W_Y Λ,    (22)

where the columns of W_Y are eigenvectors and Λ is a diagonal matrix containing the corresponding eigenvalues λ_j. The similarity transform implies that the eigenvalues of A_Y and Ã_Y are identical. The compressed DMD modes are consequently given by

    Φ_Y = Y' V_Y Σ_Y^{-1} W_Y.    (23)

Finally, the full-state DMD modes Φ are recovered using

    Φ = X' V_Y Σ_Y^{-1} W_Y.    (24)

Note that the DMD modes in Eq. (24) make use of the full data X' as well as the linear transformations V_Y Σ_Y^{-1} and W_Y obtained using the compressed data Y and Y'. The expensive SVD on X is bypassed, and it is instead performed on Y. Depending on the compression ratio, this may provide significant computational savings.

Algorithm 1: Compressed Dynamic Mode Decomposition.
(1) Form the left and right snapshot sequences X and X'.
(2) Draw the sensing matrix C.
(3) Compress the input matrices: Y = C X, Y' = C X'.
(4) Compute the truncated SVD Y ≈ U_Y Σ_Y V_Y^*.
(6) Least-squares fit: Ã_Y = U_Y^* Y' V_Y Σ_Y^{-1}.
(7) Eigenvalue decomposition: Ã_Y W_Y = W_Y Λ.
(8) Compute the full-state modes Φ = X' V_Y Σ_Y^{-1} W_Y.
(9) Compute the amplitudes b using x_1 as initial condition.
(10) Form the Vandermonde matrix V (optional).

Given a matrix X containing the flattened video frames, this procedure computes the approximate dynamic mode decomposition X ≈ Φ B V, where Φ are the DMD modes, b are the amplitudes, and V is the Vandermonde matrix describing the temporal evolution. The procedure is controlled by the two parameters k and p, the target rank and the number of samples, respectively, which are required to be integers satisfying k ≤ p ≪ n.

The computational steps are summarized in Algorithm 1 and further numerical details are presented in cdmd .
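A compact sketch of Algorithm 1 is given below, assuming dense Gaussian measurements for simplicity (Section 3.3 discusses cheaper sensing matrices); the function and variable names are ours, and the sparsity-constrained amplitude fit of Section 2.3 is omitted here.

```python
import numpy as np

def cdmd(X, k, p, seed=0):
    """Compressed DMD of a video matrix X of shape (n, m), target rank k, p measurements."""
    rng = np.random.default_rng(seed)
    XL, XR = X[:, :-1], X[:, 1:]                      # (1) left/right snapshot sequences
    C = rng.standard_normal((p, X.shape[0]))          # (2) draw sensing matrix
    YL, YR = C @ XL, C @ XR                           # (3) compress, Eq. (18)
    U, s, Vh = np.linalg.svd(YL, full_matrices=False)
    U, s, Vh = U[:, :k], s[:k], Vh[:k, :]             # (4) truncated SVD of compressed data
    Atilde = U.conj().T @ YR @ Vh.conj().T / s        # (6) least-squares fit, Eq. (21)
    lam, W = np.linalg.eig(Atilde)                    # (7) eigenvalue decomposition, Eq. (22)
    Phi = XR @ Vh.conj().T / s @ W                    # (8) full-state modes, Eq. (24)
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # (9) amplitudes from first frame
    V = np.vander(lam, N=X.shape[1], increasing=True) # (10) Vandermonde matrix (optional)
    return Phi, lam, b, V
```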

Remark 1

The computational performance heavily depends on the measurement matrix used to construct the compressed matrix, as described in the next section. For a practical implementation, sparse or single-pixel measurements (random row sampling) are favored. The latter, most memory-efficient, method avoids the generation of a large number of random numbers and the expensive matrix-matrix multiplication in step 3.

Remark 2

One alternative to the predefined target rank is the recent hard-thresholding algorithm of Gavish and Donoho gavish . This method can be combined with step 4 to automatically determine the optimal target rank.

Remark 3

As described in Section 2.3, step 9 can be replaced by the orthogonal matching pursuit algorithm in order to obtain a sparsity-constrained solution b̂. Computing the OMP solution is in general extremely fast, but for high-resolution video streams this step can become computationally expensive. However, instead of computing the amplitudes based on the full-state dynamic modes Φ, the compressed DMD modes Φ_Y can be used. Hence, Eq. (12) can be reformulated as

    minimize_{b̂}  ||y_1 − Φ_Y b̂||_2^2   subject to   ||b̂||_0 ≤ K,    (25)

where y_1 is the first compressed video frame. Step 9 is then replaced by this compressed OMP problem.

3.3 Measurement Matrices

A basic sensing matrix C can be constructed by drawing p × n independent random samples from a Gaussian, uniform or sub-Gaussian (e.g., Bernoulli) distribution. It can be shown that these measurement matrices have optimal theoretical properties; however, for practical large-scale applications they are often not feasible. This is because generating a large number of random numbers can be expensive, and computing Eq. (18) using unstructured dense matrices is costly in time. From a computational perspective it is favorable to build a structured random sensing matrix which is memory efficient and enables the execution of fast matrix-matrix multiplications. For instance, Woolfe et al. woolfe2008fast showed that the costs can be reduced using a subsampled random Fourier transform (SRFT) sensing matrix

    C = R F D,    (26)

where R draws p random rows (without replacement) from the identity matrix, F is the unnormalized discrete Fourier transform, and D is a diagonal matrix with independent random diagonal elements uniformly distributed on the complex unit circle. While the SRFT sensing matrix has nice theoretical properties, the improvement over unstructured dense matrices is not necessarily significant in practice, and it is often sufficient to construct even simpler sensing matrices. An interesting approach making the matrix-matrix multiplication (18) redundant is to use single-pixel measurements (random row sampling),

    C = R.    (27)

In a practical implementation this allows the compressed matrix Y to be constructed by choosing p random rows without replacement from X. Hence, only p random numbers need to be generated and no memory is required for storing a sensing matrix C. A different approach is the method of sparse random projections achlioptas2003database . The idea is to construct a sensing matrix C with independent identically distributed entries

    C_{jt} = sqrt(s) × { +1 with probability 1/(2s);  0 with probability 1 − 1/s;  −1 with probability 1/(2s) },    (28)

where the parameter s controls the sparsity. While Achlioptas achlioptas2003database has proposed the values s = 1 and s = 3, Li et al. li2006very showed that very sparse (aggressive) sampling rates like s = sqrt(n) also achieve accurate results. Modern sparse matrix packages allow rapid execution of (18).
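Sketches of the three measurement strategies discussed above are given below; the sizes and the dense-then-sparse construction of Eq. (28) are illustrative simplifications (a production implementation would build the sparse matrix directly, e.g. via sklearn.random_projection.SparseRandomProjection).

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

def compress_single_pixel(X, p):
    """Random row sampling, Eq. (27): no sensing matrix is ever formed."""
    rows = rng.choice(X.shape[0], size=p, replace=False)
    return X[rows, :]

def sparse_sensing_matrix(p, n, s=3):
    """Sparse random projection, Eq. (28); s = 3 follows Achlioptas, larger s is sparser."""
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(p, n), p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return sparse.csr_matrix(vals)

def compress_gaussian(X, p):
    """Dense Gaussian measurements: best accuracy, highest cost."""
    C = rng.standard_normal((p, X.shape[0]))
    return C @ X
```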

3.4 GPU Accelerated Implementation

While most current desktop computers allow multithreading and also multiprocessing, using a graphics processing unit (GPU) enables massive parallel processing. The paradigm of parallel computing becomes more important as data volumes grow while CPU clock speeds stagnate. The architectures of a modern CPU and GPU are illustrated in Figure 5. The key difference between these architectures is that the CPU consists of a few arithmetic logic units (ALUs) and is highly optimized for low-latency access to cached data sets, while the GPU is optimized for data-parallel, throughput computations. This is achieved by the large number of small arithmetic logic units.

Figure 5: Illustration of the CPU (a) and GPU (b) architecture.
Figure 6: Illustration of the data parallelism in matrix-matrix multiplications.

Traditionally this architecture was designed for the real-time creation of high-definition 2D/3D graphics. However, NVIDIA’s programming model for parallel computing, CUDA, opens up the GPU as a general parallel computing device CUDA . Using high-performance linear algebra libraries, e.g., CULA CULA , can help to accelerate comparable CPU implementations substantially. Take for instance the multiplication of two square n × n matrices, illustrated in Figure 6. The computation involves the evaluation of n² dot products. (Modern efficient matrix-matrix multiplications are based on block matrix decomposition or other computational tricks, and do not actually compute dot products; the concept of parallelism remains the same, however.) The data parallelism therein is that each dot product can be computed independently of the others. With enough ALUs the computational time can be substantially reduced. This parallelism applies readily to the generation of random numbers and many other linear algebra routines.

Relatively few GPU-accelerated background subtraction methods have been proposed carr2008gpu ; pham2010gpu ; Qin20151 . The authors achieve considerable speedups compared to the corresponding CPU implementations; however, the frame rates of the proposed methods remain limited for high-definition videos. This is mainly because many statistical methods do not fully benefit from the GPU architecture. In contrast, linear algebra based methods can benefit substantially from parallel computing. An analysis of Algorithm 1 reveals that generating random numbers in step 2 and the matrix products in steps 3, 6, and 8 are particularly suitable for parallel processing. The computation of the deterministic SVD, the eigenvalue decomposition and the least-squares solver can also benefit from the GPU architecture. Overall, the GPU-accelerated DMD implementation is substantially faster than the MKL (Intel Math Kernel Library) accelerated routine. The disadvantage of current GPUs is the rather limited bandwidth, i.e., the amount of data which can be exchanged per unit of time, between CPU and GPU memory. However, this overhead can be mitigated using asynchronous memory operations.
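As a hedged sketch only, the two dominant steps (compression and the SVD of the compressed matrix) can be offloaded to the GPU as follows. CuPy is used here purely for illustration; the authors' own implementation relies on CUDA libraries such as CULA, and the API below is not theirs.

```python
import numpy as np
import cupy as cp

def compress_and_factor_gpu(X_cpu, p, k):
    """Compress the video matrix and compute the truncated SVD on the GPU."""
    cp.random.seed(0)
    X = cp.asarray(X_cpu, dtype=cp.float32)             # host -> device transfer
    C = cp.random.standard_normal((p, X.shape[0]), dtype=cp.float32)
    Y = C @ X                                           # compression runs on the GPU
    U, s, Vh = cp.linalg.svd(Y, full_matrices=False)    # SVD of the small compressed matrix
    return cp.asnumpy(U[:, :k]), cp.asnumpy(s[:k]), cp.asnumpy(Vh[:k, :])
```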

4 Results

In this section we evaluate the computational performance and the suitability of compressed DMD for object detection. To evaluate the detection performance, a foreground mask is computed by thresholding the difference between the true frame and the reconstructed background. A standard method is to use the Euclidean distance, leading to the following binary classification problem:

    X^FG_{jt} = 1  if |X_{jt} − X^BG_{jt}| > τ,  and 0 otherwise,    (29)

where X_{jt} denotes the j-th pixel of the t-th video frame, X^BG_{jt} denotes the corresponding pixel of the modeled background, and τ is a threshold. Pixels belonging to foreground objects are set to 1 and 0 otherwise. Access to the true foreground mask allows the computation of several statistical measures. For instance, common evaluation measures in the background subtraction literature are recall, precision and the F-measure. While recall measures the ability to correctly detect pixels belonging to moving objects, precision measures how many predicted foreground pixels are actually correct, i.e., it reflects the false-alarm rate. The F-measure combines both measures by their harmonic mean. A workstation (Intel Xeon CPU E5-2620 2.4GHz, 32GB DDR3 memory and NVIDIA GeForce GTX 970) was used for all following computations.
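A small sketch of the foreground mask in Eq. (29) and of the evaluation measures used below; array shapes and the threshold value are illustrative, and the metrics assume non-degenerate masks (at least one true and one predicted foreground pixel).

```python
import numpy as np

def foreground_mask(frames, background, tau=25):
    """Binary mask per Eq. (29): 1 where |frame - background| exceeds the threshold tau."""
    return (np.abs(frames.astype(float) - background.astype(float)) > tau).astype(np.uint8)

def scores(mask, gt):
    """Recall, precision and F-measure against a ground-truth binary mask gt."""
    tp = np.sum((mask == 1) & (gt == 1))
    fp = np.sum((mask == 1) & (gt == 0))
    fn = np.sum((mask == 0) & (gt == 1))
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * recall * precision / (recall + precision)
    return recall, precision, f_measure
```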

4.1 Evaluation on Real Videos

We have evaluated the performance of compressed DMD for object detection using the CD (ChangeDetection.net) and BMC (Background Models Challenge) benchmark dataset wang2014cdnet ; vacavant2013benchmark . Figure 7 illustrates the real videos of the latter dataset, posing many common challenges faced in outdoor video surveillance scenarios.

(a) (001) Boring parking
(b) (002) Big trucks
(c) (003) Wandering students
(d) (004) Rabbit in the night
(e) (005) Snowy Christmas
(f) (006) Beware of the trains
(g) (007) Train in the tunnel
(h) (008) Traffic during windy day
(i) (009) One rainy hour
Figure 7: BMC dataset: Example frames of the real videos.

Mainly, the following complex situations are encountered:

  • Illumination changes: Gradual illumination changes caused by fog or sun.

  • Low illumination: Bad light conditions, e.g., night videos.

  • Bad weather: Introduced noise (small objects) by weather conditions, e.g., snow or rain.

  • Dynamic backgrounds: Moving objects belonging to the background, e.g. waving trees or clouds.

  • Sleeping foreground objects: Former foreground objects that become motionless and move again at a later point in time.

(a) Highway
(b) Blizzard
(c) Canoe
(d) Fountain02
(e) Park
(f) Library
Figure 8: The F-measure for varying thresholds indicates the dominant background modeling performance of the sparsity-promoting compressed DMD algorithm. In particular, the performance gain (over using the zero mode only) is substantial for the dynamic background scenes ‘Canoe’ and ‘Fountain02’.

Evaluation settings.

In order to obtain reproducible results the following settings have been used. For a given video sequence, the low-rank dynamic mode decomposition is computed using a very sparse measurement matrix with sparsity factor s and p measurements. While a fixed number of samples is used here, the choice can also be guided by the target rank. The target rank k is automatically determined via the optimal hard threshold for singular values gavish . Once the dynamic mode decomposition is obtained, the optimal set of modes is selected using the orthogonal matching pursuit method. In general, the use of K non-zero entries achieves good results. Instead of using a predefined value for K, cross-validation can be used to determine the optimal number of non-zero entries. Further, the dynamic mode decomposition as presented here is formulated as a batch algorithm, in which a given long video sequence is split into batches of consecutive frames. The decomposition is then computed for each batch independently.

The CD dataset.

First, six CD video sequences are used to contextualize the background modeling quality of the sparse-coding approach. This is compared to using the zero (static background) mode only. Figure 8 shows the evaluation results of one batch by plotting the F-measure against the threshold for background classification. In five out of the six examples the sparse-coding approach (cDMD k=opt) dominates. In particular, significant improvements are achieved for the dynamic background video sequences ‘Canoe’ and ‘Fountain02’. Only in the case of the ‘Park’ video sequence does the method tend to over-fit. Interestingly, the performance of the compressed algorithm is overall slightly better than that of the exact DMD algorithm. This is due to the implicit regularization of randomized algorithms Mahoney2011 ; rSVDR .

The BMC dataset.

In order to compare the cDMD algorithm with other RPCA algorithms, the BMC dataset has been used. Table 1 shows the evaluation results computed with the BMC wizard for all videos. An individual threshold value has been selected for each video to compute the foreground mask. For comparison, the evaluation results of other RPCA methods are shown bouwmans2015decomp . Overall, cDMD achieves an average F-measure of about 0.648. This is slightly better than the performance of GoDec Zhou11godec and nearly as good as LSADM Goldfarb . However, it is lower than the F-measure achieved with the RSL method RPCA2 . Figure 9 presents visual results for example frames across 5 videos. The last row shows the smoothed (median filtered) foreground mask.

Figure 9: Visual evaluation results for 5 example frames corresponding to the BMC videos 002, 003, 006, 007 and 009. The top row shows the original grayscale images (moving objects are highlighted). The second row shows the difference between the reconstructed cDMD background and the original frame. Row three shows the thresholded foreground mask and row four the additionally median-filtered foreground mask.

Discussion.

The results reveal some of the strengths and limitations of the compressed DMD algorithm. First, because cDMD is presented here as a batch algorithm, detecting sleeping foreground objects as they occur in video 001 is difficult. Another weakness is the limited capability of dealing with non-periodic dynamic backgrounds, e.g., big waving trees and moving clouds as occurring in videos 001, 005, 008 and 009. On the other hand, good results are achieved for videos 002, 003, 004 and 007, showing that DMD can deal with large moving objects and low illumination conditions. The integration of compressed DMD into a video system can overcome some of these initial issues. Hence, instead of discarding the previously modeled background frames, a background maintenance framework can be used to incrementally update the model. In particular, this allows sleeping foreground objects to be handled better. Further, simple post-processing techniques (e.g., median filters or morphological transformations) can substantially reduce the false-positive rate.

4.2 Computational Performance

Figure 12 shows the average frames-per-second (fps) rate required to obtain the foreground mask for varying video resolutions. The results illustrate the substantial computational advantage of the cDMD algorithm over the standard DMD. The computational savings are mainly achieved by avoiding the expensive computation of the singular value decomposition on the full data: the compression step replaces the SVD of the large n × m video matrix with the SVD of the much smaller p × m compressed matrix. The computation of the full modes in Eq. (24) remains the only computationally expensive step of the algorithm. However, this step is embarrassingly parallel and the computational time can be further reduced using a GPU-accelerated implementation. The decomposition of an HD video feed using the GPU-accelerated implementation achieves a substantial speedup compared to the corresponding CPU cDMD and (exact) DMD implementations. The speedup of the GPU implementation can be increased even further using sparse or single-pixel (sPixel) measurement matrices.

Figure 10 investigates the performance of the different measurement matrices in more detail. To this end, the fps rate and the F-measure are plotted for a varying number of samples p. Gaussian measurements achieve the best accuracy in terms of the F-measure, but the computational costs become increasingly expensive. Single-pixel measurements (sPixel) are the most computationally efficient method. The primary advantages of single-pixel measurements are the memory efficiency and the simple implementation. Sparse sensing matrices offer the best trade-off between computational time and accuracy, but require access to sparse matrix packages.

It is important to stress that randomized sensing matrices cause random fluctuations influencing the background model quality, as illustrated in Figure 11. The bootstrap confidence intervals show that sparse measurements have lower dispersion than single-pixel measurements. This is because single-pixel measurements discard more information than sparse and Gaussian sensing matrices.

Figure 10: Algorithm runtime (excluding computation of the foreground mask) and accuracy for a varying number of samples p. Here a video sequence with 200 frames is used.
Figure 11: Bootstrap confidence intervals of the F-measure computed using both sparse and single-pixel measurements.
Table 1: Evaluation results for the nine real videos of the BMC dataset. For comparison, the results of three other leading robust PCA algorithms are presented, adapted from bouwmans2015decomp .

Method (reference)                  Measure     001    002    003    004    005    006    007    008    009    Average
RSL (De La Torre et al. RPCA2)      Recall      0.800  0.689  0.840  0.872  0.861  0.823  0.658  0.589  0.690     -
                                    Precision   0.732  0.808  0.804  0.585  0.598  0.713  0.636  0.526  0.625     -
                                    F-measure   0.765  0.744  0.821  0.700  0.706  0.764  0.647  0.556  0.656   0.707
LSADM (Goldfarb et al. Goldfarb)    Recall      0.693  0.535  0.784  0.721  0.643  0.656  0.449  0.621  0.701     -
                                    Precision   0.511  0.724  0.802  0.729  0.475  0.655  0.693  0.633  0.809     -
                                    F-measure   0.591  0.618  0.793  0.725  0.549  0.656  0.551  0.627  0.752   0.650
GoDec (Zhou and Tao Zhou11godec)    Recall      0.684  0.552  0.761  0.709  0.621  0.670  0.465  0.598  0.700     -
                                    Precision   0.444  0.682  0.808  0.728  0.462  0.636  0.626  0.601  0.747     -
                                    F-measure   0.544  0.611  0.784  0.718  0.533  0.653  0.536  0.600  0.723   0.632
cDMD                                Recall      0.552  0.697  0.778  0.693  0.611  0.700  0.720  0.515  0.566     -
                                    Precision   0.581  0.675  0.773  0.770  0.541  0.602  0.823  0.510  0.574     -
                                    F-measure   0.566  0.686  0.776  0.730  0.574  0.647  0.768  0.512  0.570   0.648
Figure 12: CPU and GPU algorithm runtimes (including the computation of the foreground mask) for varying video resolutions (200 frames). The optimal target rank k is automatically determined and p samples are used.

5 Conclusion and Outlook

We have introduced the compressed dynamic mode decomposition as a novel algorithm for video background modeling. Although many techniques have been developed over the last decade and a half to accomplish this task, significant challenges remain for the computer vision community when fast processing of high-definition video is required. Indeed, real-time HD video analysis remains one of the grand challenges of the field. Our cDMD method provides compelling evidence that it is a viable candidate for meeting this grand challenge, even on standard CPU computing platforms. The frames-per-second rate is highly competitive compared to other state-of-the-art algorithms, e.g., Gaussian mixture-based algorithms. Compared to current robust principal component analysis based algorithms, the increase in speed is even more substantial. In particular, the GPU-accelerated implementation substantially improves the computational time.

Despite the significant computational savings, the cDMD remains competitive with other leading algorithms in the quality of the decomposition itself. Our results show that, for both standard and challenging environments, the cDMD’s object detection accuracy in terms of the F-measure is competitive with leading RPCA-based algorithms bouwmans2015decomp . However, the algorithm cannot compete, in terms of the F-measure, with highly specialized algorithms, e.g., optimized Gaussian mixture-based algorithms for background modeling Sobralreview . The main difficulties arise when video feeds are heavily crowded or dominated by non-periodic dynamic background objects. Overall, the trade-off between speed and accuracy of compressed DMD is compelling.

Future work will aim to improve the background subtraction quality as well as to integrate a number of innovative techniques. One technique that is particularly useful for object tracking is the multi-resolution DMD kutzMRDMD . This algorithm has been shown to be a potential method for target tracking applications. Thus one can envision the integration of multi-resolution ideas with cDMD, i.e. a multi-resolution compressed DMD method, in order to separate the foreground video into different dynamic targets when necessary.

Acknowledgements.
We would like to express our gratitude to E. R. Davies, K. Manohar and the three anonymous reviewers for many helpful comments on an earlier version of this paper. JNK acknowledges support from Air Force Office of Scientific Research (FA9500-15-C-0039). SLB acknowledges support from the Department of Energy under award DE-EE0006785. NBE acknowledges support from the UK Engineering and Physical Sciences Research Council (EP/L505079/1).

Appendix A Notation

Scalars
k — Number of modes (target rank)
p — Number of samples (measurements)
s — Number of sparse samples (sparsity factor)
K — Number of non-zero amplitudes
n — Number of pixels per video frame
m — Number of video frames
λ — Eigenvalue
ω — Continuous-time eigenvalue

Vectors
x — Flattened video frame
y — Compressed video frame
φ — DMD mode
b — Amplitudes
b̂ — Sparsity-constrained amplitudes

Matrices
X, X' — Left and right snapshot sequences
Y, Y' — Compressed left/right snapshot sequences
C — Measurement matrix
A — Linear map
Ã — Rank-reduced linear map
Φ — DMD modes
Φ_Y — Compressed DMD modes
W — Rank-reduced eigenvectors
Λ — Rank-reduced eigenvalues (diagonal matrix)
B — Amplitudes (diagonal matrix)
V — Vandermonde matrix
U_Y — Truncated compressed left singular vectors
V_Y — Truncated compressed right singular vectors
Σ_Y — Truncated compressed singular values

References

  • (1) T. Bouwmans, Traditional and recent approaches in background modeling for foreground detection: An overview, Computer Science Review 11-12 (2014) 31–66. doi:10.1016/j.cosrev.2014.04.001.
  • (2) A. Sobral, A. Vacavant, A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos, Computer Vision and Image Understanding 122 (2014) 4–21. doi:10.1016/j.cviu.2013.12.005.
  • (3) J. Grosek, J. N. Kutz, Dynamic mode decomposition for real-time background/foreground separation in video (2014). arXiv:1404.7592.
  • (4) N. B. Erichson, C. Donovan, Randomized low-rank dynamic mode decomposition for motion detection, Computer Vision and Image Understanding 146 (2016) 40–50. doi:10.1016/j.cviu.2016.02.005.
  • (5) J. N. Kutz, X. Fu, S. L. Brunton, N. B. Erichson, Multi-resolution dynamic mode decomposition for foreground/background separation and object tracking, in: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), 2015, pp. 921–929. doi:10.1109/ICCVW.2015.122.
  • (6) N. Halko, P. G. Martinsson, J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review 53 (2) (2011) 217–288. doi:10.1137/090771806.
  • (7) S. L. Brunton, J. L. Proctor, J. H. Tu, J. N. Kutz, Compressed sensing and dynamic mode decomposition, Journal of Computational Dynamics 2 (2) (2015) 165–191. doi:10.3934/jcd.2015002.
  • (8) P. Schmid, Dynamic mode decomposition of numerical and experimental data, Journal of Fluid Mechanics 656 (2010) 5–28. doi:10.1017/S0022112010001217.
  • (9) C. Rowley, I. Mezić, S. Bagheri, P. Schlatter, D. Henningson, Spectral analysis of nonlinear flows, Journal of Fluid Mechanics 641 (2009) 115–127.
  • (10) E. J. Candès, X. Li, Y. Ma, J. Wright, Robust principal component analysis?, Journal of the ACM 58 (3) (2011) 1–37. doi:10.1145/1970392.1970395.
  • (11) T. Bouwmans, E. H. Zahzah, Robust PCA via principal component pursuit: A review for a comparative evaluation in video surveillance, Computer Vision and Image Understanding 122 (2014) 22–34. doi:10.1016/j.cviu.2013.11.009.
  • (12) T. Bouwmans, A. Sobral, S. Javed, S. K. Jung, E.-H. Zahzah, Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset (2015). arXiv:1511.01245.
  • (13) M. R. Jovanović, P. J. Schmid, J. W. Nichols, Sparsity-promoting dynamic mode decomposition, Physics of Fluids (1994-present) 26 (2) (2014) 024103.
  • (14) S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing 41 (12) (1993) 3397–3415.
  • (15) J. A. Tropp, A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory 53 (12) (2007) 4655–4666.
  • (16) R. Rubinstein, M. Zibulevsky, M. Elad, Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit, CS Technion 40 (8) (2008) 1–15.
  • (17) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825–2830.

  • (18) D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory 52 (4) (2006) 1289–1306. doi:10.1109/TIT.2006.871582.
  • (19) E. J. Candès, M. B. Wakin, An introduction to compressive sampling, IEEE Signal Processing Magazine 25 (2) (2008) 21–30. doi:10.1109/MSP.2007.914731.
  • (20) R. G. Baraniuk, Compressive sensing, IEEE Signal Processing Magazine 24 (4) (2007) 118–120.
  • (21) E. Liberty, Simple and deterministic matrix sketching, in: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2013, pp. 581–588.
  • (22) D. P. Woodruff, Sketching as a tool for numerical linear algebra, Foundations and Trends in Theoretical Computer Science 10 (1-2) (2014) 1–157. doi:10.1561/0400000060.
  • (23) A. C. Gilbert, J. Y. Park, M. B. Wakin, Sketched SVD: Recovering spectral features from compressive measurements, arXiv preprint arXiv:1211.0361 (2012) 1–10.
  • (24) J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, J. N. Kutz, On dynamic mode decomposition: Theory and applications (2013). arXiv:1312.0041.
  • (25) M. Gavish, D. Donoho, The optimal hard threshold for singular values is 4/√3, IEEE Transactions on Information Theory 60 (8) (2014) 5040–5053. doi:10.1109/TIT.2014.2323359.
  • (26) F. Woolfe, E. Liberty, V. Rokhlin, M. Tygert, A fast randomized algorithm for the approximation of matrices, Applied and Computational Harmonic Analysis 25 (3) (2008) 335–366.
  • (27) D. Achlioptas, Database-friendly random projections: Johnson-Lindenstrauss with binary coins, Journal of Computer and System Sciences 66 (4) (2003) 671–687.
  • (28) P. Li, T. J. Hastie, K. W. Church, Very sparse random projections, in: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, 2006, pp. 287–296.
  • (29) J. Nickolls, I. Buck, M. Garland, K. Skadron, Scalable parallel programming with CUDA, Queue 6 (2) (2008) 40–53. doi:10.1145/1365490.1365500.
  • (30) J. R. Humphrey, D. K. Price, K. E. Spagnoli, A. L. Paolini, E. J. Kelmelis, CULA: Hybrid GPU accelerated linear algebra routines (2010). doi:10.1117/12.850538.
  • (31) P. Carr, GPU accelerated multimodal background subtraction, in: Digital Image Computing: Techniques and Applications, IEEE, 2008, pp. 279–286.
  • (32) V. Pham, P. Vo, V. T. Hung, et al., GPU implementation of extended gaussian mixture model for background subtraction, in: IEEE International Conference on Computing and Communication Technologies, Research, Innovation, and Vision for the Future, 2010, pp. 1–4.
  • (33) L. Qin, B. Sheng, W. Lin, W. Wu, R. Shen, GPU-accelerated video background subtraction using Gabor detector, Journal of Visual Communication and Image Representation 32 (2015) 1–9. doi:10.1016/j.jvcir.2015.07.010.
  • (34) Y. Wang, P.-M. Jodoin, F. Porikli, J. Konrad, Y. Benezeth, P. Ishwar, CDnet 2014: An expanded change detection benchmark dataset, in: IEEE Workshop on Computer Vision and Pattern Recognition, IEEE, 2014, pp. 393–400.

  • (35) A. Vacavant, T. Chateau, A. Wilhelm, L. Lequievre, A benchmark dataset for outdoor foreground/background extraction, in: Computer Vision–ACCV 2012 Workshops, Springer, 2013, pp. 291–300.
  • (36) M. W. Mahoney, Randomized algorithms for matrices and data, Foundations and Trends in Machine Learning 3 (2) (2011) 123–224. doi:10.1561/2200000035.
  • (37) N. B. Erichson, S. Voronin, S. L. Brunton, J. N. Kutz, Randomized matrix decompositions using R (2016). arXiv:1608.02148.
  • (38) T. Zhou, D. Tao, Godec: Randomized low-rank & sparse matrix decomposition in noisy case, in: International Conference on Machine Learning, ICML, 2011, pp. 1–8.
  • (39) D. Goldfarb, S. Ma, K. Scheinberg, Fast alternating linearization methods for minimizing the sum of two convex functions, Mathematical Programming 141 (1-2) (2013) 349–382. doi:10.1007/s10107-012-0530-2.
  • (40) F. De la Torre, M. Black, A framework for robust subspace learning, International Journal of Computer Vision 54 (1-3) (2003) 117–142.
  • (41) J. N. Kutz, X. Fu, S. L. Brunton, Multiresolution dynamic mode decomposition, SIAM Journal on Applied Dynamical Systems 15 (2) (2016) 713–735.