Global Optimality Guarantees for Nonconvex Unsupervised Video Segmentation

07/09/2019 · Brendon G. Anderson, et al. · UC Berkeley

In this paper, we consider the problem of unsupervised video object segmentation via background subtraction. Specifically, we pose the nonsemantic extraction of a video's moving objects as a nonconvex optimization problem via a sum of sparse and low-rank matrices. The resulting formulation, a nonnegative variant of robust principal component analysis, is more computationally tractable than its commonly employed convex relaxation, although not generally solvable to global optimality. In spite of this limitation, we derive intuitive and interpretable conditions on the video data under which the uniqueness and global optimality of the object segmentation are guaranteed using local search methods. We illustrate these novel optimality criteria through example segmentations using real video data.


I Introduction

One of the most fundamental problems in computer vision and machine learning is that of video object segmentation. In this domain, the general goal is to distinguish and extract objects of interest from the rest of the video's content. Visual segmentation algorithms take on a variety of tasks and forms. For instance, semantic segmentation tackles the problem of assigning each extracted object to a certain cluster or predefined class, and supervised (or semi-supervised) methods are endowed with one or more ground-truth extractions or annotations [1]. This wide range of methodologies makes video segmentation suitable for many applications, such as surveillance systems, traffic monitoring, and gesture recognition, and therefore video object segmentation remains an active and challenging area of research [2, 3].

This paper is concerned with nonsemantic and unsupervised video segmentation via background subtraction: the task of extracting moving objects from a video's background using static cameras. Traditional techniques for moving object segmentation typically use Gaussian mixture models (GMMs), which offer simple models but lack robustness [4, 5]. Neural networks have also found popularity due to the balance they strike between performance and computational efficiency [6]. However, due to their nonconvex nature, neural networks do not generally possess guarantees on the global optimality of their resulting segmentations [7].

Recently, much attention has been placed on approaches based on robust principal component analysis (RPCA), which model the video as the sum of low-rank and sparse matrices. Perhaps the most notable of these methods is Principal Component Pursuit (PCP) introduced in the seminal paper by Candès et al. [8]. Although the convexified approach in PCP provides conditions under which exact recovery of the sparse components is guaranteed, its use of lifted variables results in scalability and computational hindrances [7].

In order to tackle large-scale segmentation problems, lower-dimensional nonconvex formulations such as nonnegative robust principal component analysis (NRPCA) and robust nonnegative matrix factorization (RNMF) have been proposed [9, 10, 11, 12]. These nonconvex approaches often permit parallelization, lending themselves to lower computational cost and scalability to larger problems [13]. Furthermore, the nonnegative nature of grayscale pixel values is explicitly embedded in modern methods like NRPCA and RNMF, unlike in many of the more traditional techniques. Although these nonconvex formulations have been empirically shown to perform on par with the popular PCP method, previous works have focused on local optimality of the resulting video segmentations, typically computed by alternating minimization over the subproblems that are convex in each variable separately [10, 11].

In this work, we aim to supplement the strong empirical and computational properties of video segmentation via nonconvex NRPCA by providing intuitive and interpretable global optimality guarantees. These guarantees target two key aspects of moving object segmentation. First, they promise global solutions when using local search algorithms, such as stochastic gradient descent or its variants. The computational efficiency of these simple algorithms is paramount in large-scale machine learning problems [14, 15, 16], e.g., those with high-resolution video data. Second, safety-critical video segmentation applications, such as autonomous driving [17] and medical imaging [18], demand global optimality guarantees to promise consistent performance and safety margins. With the recent influx of studies on spurious local minima of nonconvex optimization problems [19, 20, 21], we approach this problem by exploiting new results on the benign landscape of rank-1 NRPCA [9]. Under this framework, we propose criteria under which the video segmentation is guaranteed to be unique and globally optimal.

The remainder of this paper is structured as follows. In Section II, we describe the problem and introduce our terminology and notations. In Section III, we show that the problem can be simplified to one in which the moving objects consist of elementary shapes. Then, in Sections IV and V, we derive conditions on video data under which global optimality guarantees can be made. Finally, we perform numerical experiments and make concluding remarks in Sections VI and VII.

II Problem Statement

Consider a video sequence of frames, each being pixels tall and pixels wide, where . (Note that, throughout the paper, we represent the nonnegative and positive orthants of by and , respectively. We use analogous definitions for and .) We denote the video frames by the matrices , where . By defining the pixel set as , the pixels of a grayscale video are given by

where conventionally, or . In this work, we scale the pixel values to an interval

, for technical reasons explained later. Vectorizing each frame of the video, we form the data matrix

where and . Note that converts each frame into an equivalent extended vector, so that the single matrix captures all of the video’s information. We also define the measurement set as .

We choose to model the video data matrix as the sum of two components. The first component is chosen to be a nonnegative rank-1 matrix, used to capture the relatively static behavior of the video’s background. The second component is a sparse matrix, taken to represent the dynamic foreground (i.e., the moving objects). Under this model, we seek the decomposition

(1)

where and , and is sparse. This can be solved for through the following nonconvex, nonnegative -minimization problem, termed in the literature as nonnegative robust principal component analysis (NRPCA) [9]:

(2)

This is a nonconvex problem that may generally have spurious local minima, i.e., points that a local search algorithm may converge to that do not correspond to the globally optimal solution. Note that we enforce nonnegativity of the optimization variables and , yielding natural interpretations as the video's nominal background pattern and its associated scalings in each frame, respectively. Furthermore, we have added a regularization term with tuning parameter to our formulation, since the unregularized objective is invariant to scaling. In other words, if minimizes the unregularized problem, then so does for every . Therefore, under regularization, the unique solution should be the pair for which .
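As a concrete illustration, the following pure-Python sketch minimizes a rank-1 NRPCA objective of this kind by projected subgradient descent. The specific regularizer, step size, iteration budget, and toy data below are our own illustrative assumptions, not the paper's exact formulation or algorithm.

```python
# Illustrative projected-subgradient sketch of rank-1 NRPCA:
#   minimize  sum_ij |X[i][j] - u[i]*w[j]| + lam * (||u||^2 - ||w||^2)^2
#   subject to  u >= 0, w >= 0.
# The balancing regularizer and all constants are illustrative assumptions.

def nrpca_rank1(X, steps=2000, lr=1e-3, lam=0.1):
    n, nf = len(X), len(X[0])
    u = [1.0] * n                      # nonnegative initialization
    w = [1.0] * nf

    def loss(u, w):
        fit = sum(abs(X[i][j] - u[i] * w[j]) for i in range(n) for j in range(nf))
        bal = sum(ui * ui for ui in u) - sum(wj * wj for wj in w)
        return fit + lam * bal * bal

    best_u, best_w, best = list(u), list(w), loss(u, w)
    for _ in range(steps):
        bal = sum(ui * ui for ui in u) - sum(wj * wj for wj in w)
        # Subgradient of the l1 fit term at each entry.
        sgn = [[1.0 if u[i] * w[j] > X[i][j] else -1.0 for j in range(nf)]
               for i in range(n)]
        gu = [sum(sgn[i][j] * w[j] for j in range(nf)) + 4 * lam * bal * u[i]
              for i in range(n)]
        gw = [sum(sgn[i][j] * u[i] for i in range(n)) - 4 * lam * bal * w[j]
              for j in range(nf)]
        u = [max(0.0, u[i] - lr * gu[i]) for i in range(n)]   # project onto u >= 0
        w = [max(0.0, w[j] - lr * gw[j]) for j in range(nf)]  # project onto w >= 0
        cur = loss(u, w)
        if cur < best:                 # keep the best iterate seen so far
            best_u, best_w, best = list(u), list(w), cur
    return best_u, best_w, best

# Toy data: an exact rank-1 background plus one sparse "foreground" entry.
X = [[1.0, 1.0, 1.0],
     [2.0, 2.0, 5.0]]
u, w, final = nrpca_rank1(X)
```

The l1 fit term absorbs the sparse foreground as residual, while the rank-1 product captures the background; the quartic term resolves the scaling invariance discussed above.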

Under the decomposition (1), we define the video’s background set and foreground set as and , respectively. Accordingly, two bipartite graphs can be introduced, the background graph having edge set , and the foreground graph having edge set . The first vertex set of each graph corresponds to pixel numbers: . The second vertex set associates with frame numbers: . A toy example of these graphs follows.
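The background and foreground edge sets above can be built directly from a foreground indicator. In this hypothetical sketch, `mask[p][f]` marks pixel `p` as foreground in frame `f`; the mask data and all variable names are our own, not the paper's Example 1.

```python
# Construct the background/foreground bipartite edge sets from a
# (hypothetical) foreground mask: mask[p][f] is True when pixel p is
# foreground in frame f. Pixel numbers and frame numbers form the two
# vertex sets of each bipartite graph.

def bipartite_edges(mask):
    n, nf = len(mask), len(mask[0])
    fg = {(p, f) for p in range(n) for f in range(nf) if mask[p][f]}
    bg = {(p, f) for p in range(n) for f in range(nf) if not mask[p][f]}
    return bg, fg

# Toy 4-pixel, 2-frame mask (illustrative only).
mask = [[False, False],
        [True,  False],
        [False, True ],
        [False, False]]
bg_edges, fg_edges = bipartite_edges(mask)
```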

Example 1.

Suppose that a video has frames given by and , where elements of 256 represent background. Then, the data matrix is

and the foreground and background sets are, respectively, and . The corresponding graphs are shown in Fig. 1.

Fig. 1: Example graphs (a) and (b).

Now, a remarkable property of the nonconvex and nonsmooth problem (2) is that, under certain conditions on the problem data, the optimization landscape is benign, i.e., there are no spurious local minima, and the global minimum is unique [9]. This permits the use of simple local search algorithms to solve (2) to global optimality. The nonconservative sufficient conditions for benign landscape follow:

Connectivity: (3)
Identifiability: (4)

In these expressions, we denote the globally optimal solution of (2) as , the condition number (maximum element divided by minimum element) of a vector in the positive orthant as , maximum degree of a graph as , and minimum degree of a graph as . The value is a constant that depends on problem data, which will be discussed in more detail later.
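All of the quantities entering conditions (3) and (4), except the problem-dependent constant, can be computed directly from a foreground mask and a positive scaling vector. The sketch below is our own; the mask convention and function names are assumptions, and the threshold constant is left to the user.

```python
from collections import deque

# Quantities entering the connectivity and identifiability conditions:
# the condition number of a positive vector, connectivity of the background
# graph (checked by BFS), and the degree extremes of both bipartite graphs.
# mask[p][f] is True when pixel p is foreground in frame f (our convention).

def condition_number(w_star):
    return max(w_star) / min(w_star)          # max element over min element

def bg_graph_connected(mask):
    n, nf = len(mask), len(mask[0])
    adj = {("p", p): set() for p in range(n)}
    adj.update({("f", f): set() for f in range(nf)})
    for p in range(n):
        for f in range(nf):
            if not mask[p][f]:                # background edge (p, f)
                adj[("p", p)].add(("f", f))
                adj[("f", f)].add(("p", p))
    start = ("p", 0)
    seen, queue = {start}, deque([start])
    while queue:                              # breadth-first search
        v = queue.popleft()
        for nb in adj[v]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == n + nf

def degree_stats(mask):
    n, nf = len(mask), len(mask[0])
    fg_deg = [sum(row) for row in mask] \
           + [sum(mask[p][f] for p in range(n)) for f in range(nf)]
    bg_deg = [nf - sum(row) for row in mask] \
           + [n - sum(mask[p][f] for p in range(n)) for f in range(nf)]
    return max(fg_deg), min(bg_deg)           # max deg of G_fg, min deg of G_bg
```

For instance, a mask in which some pixel is foreground in every frame isolates that pixel's vertex in the background graph, so `bg_graph_connected` returns `False`.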

The problem to be addressed is as follows: When do videos satisfy the conditions (3) and (4) to guarantee a benign landscape for the optimization problem (2)? In other words, the goal is to determine conditions on the size, shape, and speed of a moving object to provide theoretical guarantees for the unique and globally optimal foreground segmentation of a video. We begin by showing that the problem can be simplified to one with elementary foreground shapes through the notion of object embedding.

III Object Embedding

In this section, we consider two videos with identical backgrounds, each having one moving object (though the results are naturally generalized to multi-object videos). We are interested in the case that the moving object of one video can be completely covered by the moving object of the other video in each frame. Here is the question of interest: If the video with the larger moving object satisfies the conditions (3) and (4) for benign landscape of (2), does the video with the smaller object also satisfy these conditions? To answer this question precisely, let us start with the following definition.

Definition 1 (Embedding).

Consider two videos and having the same background , i.e., and . We say that object is embedded in object if the foreground of video is a subset of that of video in every frame; if

where , and similarly for .

It is desirable to show that the answer to our earlier question is affirmative. We prove these implications in the following two propositions.

Proposition 1 (Embedded connectivity).

If object is embedded in object and video satisfies the connectivity condition (3), then video also satisfies the connectivity condition.

Proof.

Since is embedded in , we have for all that , which implies . This shows that the background of video is a subset of that of video , i.e., for all , which gives . Therefore, we have that is a spanning subgraph of . Since is connected by our assumption, so must be , as desired. ∎

Proposition 2 (Embedded identifiability).

If object is embedded in object and video satisfies the identifiability condition (4), then video also satisfies the identifiability condition.

Proof.

Since is embedded in , we have for all that , which implies . Hence, the maximum degrees of the foreground graphs satisfy . Similarly, we have that , and therefore the minimum degrees of the background graphs satisfy . Combining these inequalities with the identifiability inequality for video yields

showing video also satisfies the identifiability condition. ∎

It is clear that Propositions 1 and 2 are independent of the size, shape, and speed of a moving object. This allows us to restrict the rest of our analysis to videos with moving objects of elementary shapes, since a more complicated object may always be embedded into a larger object that covers it. In the case that the larger, simpler object is found to satisfy conditions (3) and (4), the results of this section show that the embedded object can be extracted to unique global optimality. For convenience, we will therefore focus on rectangular moving objects for the remainder of the paper.

IV Conditions for Connectivity

In this section, we aim to derive necessary and sufficient criteria for a video to satisfy the connectivity condition (3). We will start by defining the notion of connected backgrounds, which will assist with streamlining the proofs in Sections IV-A and IV-B, in addition to granting intuitive interpretations to the conditions that follow.

Definition 2 (Background connectivity).

Given a video with frames having associated background pixel sets

the video is said to have a connected background if the following two conditions are satisfied:

  1. .

  2. for all and such that .

We now show that having a connected background is equivalent to the video's background graph being connected; consequently, videos with connected backgrounds satisfy the connectivity condition (3). This is useful, since we can use Definition 2 to derive simple and intuitive necessary conditions a video must satisfy in order to have a connected background (and therefore to satisfy the connectivity condition). Afterwards, we prove a sufficient condition for background connectivity, which we claim is likely satisfied by nearly any video in practice.

Proposition 3 (Connectivity equivalence).

A video’s associated background graph, , is connected if and only if the video has a connected background.

Proof.

The proof will proceed via contrapositive argument. We will first prove necessity.

Necessity

Suppose that a video does not have a connected background. Then, one of the two following cases must hold:

  1. .

  2. There exist and , where , such that .

Assume that the first case holds. Then, there exists a pixel such that for all . Therefore, we have where , which implies

This shows that vertex has no incident edges in , and therefore the graph is disconnected.

Now, assume that the second case holds. We first note that , since otherwise and cannot be disjoint. Now, implies that for all pixels , either and , or and , or and . In the trivial case that some pixel is neither an element of nor an element of , then , and the first case above shows that the graph is disconnected. For pixels , we have for all , and therefore where , which implies

This shows that vertex is not adjacent to vertex for all . Similarly, one can show that for each , the corresponding vertex is not adjacent to vertex for all . Since and , the bipartite graph contains at least two connected components, defined by the disjoint edge sets and . Therefore, the graph is disconnected.

Sufficiency

Suppose that a video’s associated background graph, , is disconnected. Then, one of the two following cases must hold:

  1. There exists a vertex with no incident edges.

  2. Every vertex has at least one incident edge.

Assume that the first case holds. Then, either the isolated vertex corresponds to a pixel number or to a frame number . If vertex is isolated, then for all . This implies and therefore for all , where

(Here, represents the ceiling operator. This formula comes from the one-to-one correspondence between a pixel and its pixel number through the vectorization of a given video frame.) Thus, , which implies . Hence, the video does not have a connected background. On the other hand, if vertex is isolated, then for all . This implies and therefore for all . Thus, . Define and . Then , so again the video does not have a connected background.

Now, assume the second case holds. Then, the graph contains at least two nontrivial connected components. Therefore, the set , which defines the edge set of the graph, can be partitioned as , where and are nonempty, such that and . Now, define . This gives

Now, from the partitions and we see that a frame has only for pixel numbers . Thus, can be written equivalently as

Similarly, it can be shown that by defining , we obtain

Since and , we immediately see that . Therefore, the video does not have a connected background. ∎

Proposition 3 shows that the connectivity of the graph is entirely dictated by whether or not a video has a connected background. Therefore, we can use the notion of background connectivity to derive intuitive and meaningful criteria a video should satisfy in order to meet the connectivity condition (3).

IV-A Necessary Conditions for Connectivity

From Definition 2, we develop three necessary conditions for background connectivity of a video, which are intuitively interpretable in terms of properties of the video (i.e., properties of pixels and frames). These necessary conditions give simple methods for showing when a video does not have a connected background, in which case no guarantees on the global optimality of the minimization (2) can be made.

Proposition 4 (Object size).

If a video has a connected background, then there are at most foreground pixels in the data matrix .

Proof.

Since the background graph has vertices and is connected, the number of edges is at least . Therefore, . ∎
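The counting argument in this proof translates directly into code: a connected bipartite background graph on the pixel and frame vertices needs at least one fewer edge than it has vertices, which caps the number of foreground entries. The helper names below are our own.

```python
# Upper bound from Proposition 4: a connected background graph on
# n + nf vertices needs at least n + nf - 1 background edges, so at most
# n*nf - (n + nf - 1) entries of the data matrix can be foreground.

def max_foreground_pixels(n, nf):
    return n * nf - (n + nf - 1)

def satisfies_size_bound(mask):
    n, nf = len(mask), len(mask[0])
    fg_count = sum(sum(row) for row in mask)  # total foreground entries
    return fg_count <= max_foreground_pixels(n, nf)
```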

For an instance in which the upper bound given by Proposition 4 is tight, yet background connectivity is still achieved, see Example 1. Perhaps the most interesting implication of this result comes from the following corollary.

Corollary 1.

As the video resolution and number of frames increase, the maximum relative size of recognizable objects increases.

Proof.

Since there can be at most foreground pixels across all frames of the video, the maximum relative size of an object can be expressed as

As the resolution of the video increases, , and therefore

Furthermore, as the length of the video increases, , and therefore

Thus, we see that the maximum permissible ratio of foreground pixels to background pixels increases with the video’s resolution and number of frames, as desired. ∎

Interestingly, the maximum relative object size also shows us that with only one frame (i.e., a single picture), the largest recognizable object size decreases to zero. On the other hand, with a resolution of a single pixel, the largest recognizable object size again decreases to zero. In other words, we cannot recognize moving objects with only one frame, even with infinite resolution, and we also cannot recognize objects with only one pixel, even with infinitely many frames. Both of these observations align with the restrictions on video properties one would expect.
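These limiting cases can be checked numerically. Writing the bound of Proposition 4 as a fraction of all pixels gives a maximum foreground fraction that vanishes for a single frame or a single pixel; expressing it with exact rationals avoids rounding artifacts. The function name and this restatement of the ratio are ours.

```python
from fractions import Fraction

# Maximum fraction of foreground entries implied by Proposition 4:
#   (n*nf - (n + nf - 1)) / (n*nf),
# which equals (1 - 1/n)*(1 - 1/nf). It is zero when nf = 1 (one frame)
# or n = 1 (one pixel), and grows with both resolution and video length.

def max_foreground_fraction(n, nf):
    return Fraction(n * nf - (n + nf - 1), n * nf)

assert max_foreground_fraction(100, 1) == 0   # one frame: nothing recognizable
assert max_foreground_fraction(1, 100) == 0   # one pixel: nothing recognizable
```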

Proposition 5 (Frame connectivity).

If a video has a connected background, then each frame contains at least one background pixel.

Proof.

Suppose that there exists a frame that contains no background pixels, i.e., . Then, the video’s background pixel sets can be partitioned as and . Thus, , and therefore the video does not have a connected background. ∎

Proposition 5 can be interpreted as the requirement that an object can at no point cover the entirety of the frame. This matches intuition, since a moving object surely cannot be uniquely segmented from its background in these types of frames. A similar necessary condition on the obscurement of pixels, rather than frames, is given in Proposition 6 that follows.

Proposition 6 (Pixel connectivity).

If a video has a connected background, then each pixel is a background pixel in at least one frame.

Proof.

Suppose that there exists a pixel that is a foreground pixel for all frames . Then, for all . Thus, , which implies , and therefore the video does not have a connected background. ∎

Proposition 6 shows that if any single pixel remains as part of the foreground throughout the video’s duration, we cannot guarantee benign landscape of (2). This makes sense intuitively: if part of the background remains obscured throughout the video’s entirety, it appears implausible to guarantee unique and globally optimal recovery of that part of the background.

Iv-B Sufficient Conditions for Connectivity

The necessary conditions derived in Section IV-A are most useful in determining when the global optimality guarantees for (2) fail to hold. In this section, we reverse the implications to derive a simple and relatively relaxed sufficient condition for ensuring the graph is connected. This leads to our first main result.

Theorem 1 (Common background pixel).

Suppose that each pixel of a video is a background pixel in at least one frame. If any single pixel is a background pixel in all frames of the video, then the video has a connected background.

Proof.

Since each pixel in the video is assumed to be a background pixel in at least one frame, we have that for every , there exists a such that . This implies , so the video satisfies the first condition for background connectivity.

Now, suppose that there exists a pixel such that is a background pixel in all frames of the video. Furthermore, assume that the background pixels are partitioned as and , where and are any two arbitrary subsets of such that . Since for all , it must be that and , and therefore . Since and are arbitrary partitions, the video satisfies the second condition for background connectivity. Thus, the video has a connected background. ∎

The sufficient condition given in Theorem 1 is relaxed in the sense that many videos satisfy the property of having at least one common background pixel among all frames. These common background pixels are often found in the corners of a video, away from the “action” of the moving objects. Therefore, with the prior knowledge that a single pixel remains unobscured by the moving objects throughout the duration of the video, the connectedness of the video’s background (and therefore the connectedness of ) comes at only the price of ensuring that no single pixel is obscured by foreground throughout the video’s entirety. This property is instantiated later in the example of Section VI. We now focus our attention on the identifiability inequality (4).
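The sufficient condition of Theorem 1 is easy to verify on a foreground mask: every pixel must be background somewhere, and some single pixel must be background everywhere. The mask convention (`True` means foreground) and function name are our own.

```python
# Sufficient condition of Theorem 1:
#   (i)  every pixel is a background pixel in at least one frame, and
#   (ii) some single pixel is a background pixel in ALL frames.
# mask[p][f] is True when pixel p is foreground in frame f (our convention).

def common_background_pixel(mask):
    has_bg_somewhere = all(not all(row) for row in mask)   # condition (i)
    always_bg_pixel = any(not any(row) for row in mask)    # condition (ii)
    return has_bg_somewhere and always_bg_pixel
```

In practice, condition (ii) is often met by a corner pixel that the moving object never reaches, matching the discussion above.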

V Conditions for Identifiability

Recall the identifiability condition (4). The goal of this section is to determine what properties a video and its moving objects must possess in order to satisfy this condition. We make the following assumptions.

Assumption 1.

As supported by Propositions 1 and 2, we assume that the foreground is a rectangle, and that at least one frame contains the entire object. Furthermore, we define to be the maximum number of frames any pixel is obscured by the object. (Note that this can be directly computed for a variety of simple trajectories; see Remark 2.) We assume that there exists a black pixel in at least one frame, i.e., for some , and that . We also assume , as motivated in Section II. Additionally, we take for some , which holds when the background remains constant through the video's duration, and approximately holds when the illumination variance is small enough. Finally, we set , which turns out to be a key assumption for deriving bounds on .

Remark 1.

(Data preprocessing) Various preprocessing techniques can be used to ensure the assumptions on the problem data. For instance, shifting each pixel value by ensures that . Furthermore, in a high-resolution video, we will typically find that . In this case, the equality can be achieved by either repeating the video to increase the overall length , or by compressing the video to lower the resolution . The first approach is beneficial in the case that full-resolution video is needed, whereas the second approach lowers the problem dimension and speeds up computation. To appropriately rescale the resolution, one may set the new frame dimensions to and , where . This is the approach we take in the experiments in Section VI.
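The pixel-shift preprocessing mentioned in Remark 1 can be sketched as follows. Mapping raw grayscale values in [0, 255] affinely onto a strictly positive interval is our illustrative reading of the shift; the small constant `eps` is a user-chosen assumption.

```python
# Affinely rescale raw grayscale values from [0, 255] into [eps, 1], so that
# every entry of the data matrix is strictly positive. The affine map and the
# default eps are illustrative assumptions, not the paper's exact constants.

def rescale_pixels(frame, eps=1e-2):
    return [[eps + (1.0 - eps) * (v / 255.0) for v in row] for row in frame]
```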

We now prove the main result of this section.

Theorem 2 (Rectangle identifiability).

Suppose that a video satisfies Assumption 1. Then the video satisfies the identifiability condition (4) if and only if

(5)

where .

Proof.

As seen in the identifiability condition (4), there are four values to analyze: the condition number , the parameter , the maximum degree , and the minimum degree . We will first divide the proof into four separate computations, each dedicated to one of these values, then combine the final results at the end.

1 Condition number

Since and , we have

and so

Since by Assumption 1, we find . This implies and , and therefore . Hence,

Furthermore, since a background pixel must satisfy for all , we have and , and therefore

(6)

Taking with , as in Remark 1, we obtain . Therefore, taking large enough leads to

(7)

2 Parameter

Notice that the identifiability condition (4) depends on a parameter . This parameter is defined in [9] to be a value in the interval such that the following holds:

(8)

where and

The elements of therefore take on four forms:

  1. : We have .

  2. : We have .

  3. : We have , where by Assumption 1.

  4. : Analogous to the case above, we again find .

Since for all , we find that (8) is satisfied for . Therefore, we can choose

(9)

3 Foreground graph

Consider the graph and let us denote the degree of a vertex in as . Note that , exactly equals the number of frames in which pixel appears as foreground. Since, by Assumption 1, the maximum number of frames in which any single pixel appears as foreground is frames, we have

Next, we note that , exactly equals the number of foreground pixels in frame . By Assumption 1, at least one frame contains the entire object, and therefore the maximum number of foreground pixels in any given frame is

Therefore, we find that the maximum degree of the foreground graph becomes

(10)

4 Background graph

Consider the graph and let us denote the degree of a vertex in as . Since and are complements with respect to , we have that and are bipartite complements of one another. Hence, it must be that

for all and . This, together with the analysis of the foreground graph above, yields

Therefore, we find that the minimum degree of the background graph becomes

(11)

Combining the results of the four computations above by substituting (7), (9), (10), and (11) into (4), we find that the identifiability condition is equivalent to

(12)

Since , we find , so this gives

This is equivalent to the proposed set of inequalities (5). Hence, the conditions we provide relating the video length to the size and speed of the object are seen to be necessary and sufficient, as desired. ∎

Remark 2 (Constant trajectory).

Take the special case of an object moving horizontally at a constant speed of pixels per frame and vertically at a constant speed of pixels per frame. Then, the number of frames in which any single pixel can be considered as foreground is no more than , and is also no more than . Assuming the object moves a sufficient distance so as to not obscure any part of the background for the entirety of the video, we have , so one of the two proposed bounds is active. Hence, the maximum number of frames in which a single pixel appears as foreground becomes

(13)

giving bounds directly in terms of the object’s size and speed.
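Under the constant-trajectory assumption of Remark 2, the maximum cover time can be computed directly as the smaller of the two escape-time bounds. The symbols `w`, `h` (object width and height in pixels) and `sx`, `sy` (horizontal and vertical speeds in pixels per frame) are our own names; the min-of-two-ceilings structure follows the remark's description.

```python
import math

# Maximum number of frames any single pixel stays covered by a w-by-h
# rectangle moving at constant speed (sx, sy) pixels per frame: the object
# clears a pixel horizontally within ceil(w/sx) frames and vertically within
# ceil(h/sy) frames, so the smaller bound is active.

def max_cover_frames(w, h, sx, sy):
    bounds = []
    if sx > 0:
        bounds.append(math.ceil(w / sx))   # horizontal escape time
    if sy > 0:
        bounds.append(math.ceil(h / sy))   # vertical escape time
    if not bounds:
        raise ValueError("object must move in at least one direction")
    return min(bounds)
```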

Theorem 2 provides necessary and sufficient conditions that guarantee satisfaction of the identifiability condition (4) in terms of a rectangular object's size and trajectory, in relation to the resolution and length of the video. As one's intuition may predict, smaller rectangles and longer videos relax these conditions, indicating that globally optimal segmentation is inherently easier to achieve for videos with small moving objects and many frames. Together with Theorem 1, we can provide deterministic guarantees that the optimization problem (2) used to decompose a video has a benign landscape, and that the resulting decomposition is unique and globally optimal. These concepts are showcased in the following video segmentation example.

VI Numerical Experiments

In this section, we perform moving object segmentation via NRPCA on an example video in an effort to corroborate the two main results given in Theorems 1 and 2. For this experiment, we recorded five minutes of surveillance video on the UC Berkeley campus, instructing volunteer human subjects to walk up and down a set of stairs, acting as the moving object to segment. The original video is frames long with a resolution of pixels by pixels. We preprocess the video data so that , as described in Assumption 1 and Remark 1. Therefore, the NRPCA problem takes on approximately variables, whereas the popular convex PCP segmentation approach would optimize over an astronomical variables after lifting the problem to higher dimensions. We also shift the pixel values in from to the interval in order to guarantee is sufficiently close to unity (in this case, by (6)).

In order to solve the NRPCA problem, we set and initialize a point , with each element of