Greedy Feature Selection for Subspace Clustering

03/19/2013 · Eva L. Dyer et al. · Rice University · Carnegie Mellon University

Unions of subspaces provide a powerful generalization to linear subspace models for collections of high-dimensional data. To learn a union of subspaces from a collection of data, sets of signals in the collection that belong to the same subspace must be identified in order to obtain accurate estimates of the subspace structures present in the data. Recently, sparse recovery methods have been shown to provide a provable and robust strategy for exact feature selection (EFS)--recovering subsets of points from the ensemble that live in the same subspace. In parallel with recent studies of EFS with L1-minimization, in this paper, we develop sufficient conditions for EFS with a greedy method for sparse signal recovery known as orthogonal matching pursuit (OMP). Following our analysis, we provide an empirical study of feature selection strategies for signals living on unions of subspaces and characterize the gap between sparse recovery methods and nearest neighbor (NN)-based approaches. In particular, we demonstrate that sparse recovery methods provide significant advantages over NN methods and the gap between the two approaches is particularly pronounced when the sampling of subspaces in the dataset is sparse. Our results suggest that OMP may be employed to reliably recover exact feature sets in a number of regimes where NN approaches fail to reveal the subspace membership of points in the ensemble.


1 Introduction

1.1 Unions of Subspaces

With the emergence of novel sensing systems capable of acquiring data at scales ranging from the nano to the peta, modern sensor and imaging data are becoming increasingly high-dimensional and heterogeneous. To cope with this explosion of high-dimensional data, one must exploit the fact that low-dimensional geometric structure exists amongst collections of data.

Linear subspace models are one of the most widely used signal models for collections of high-dimensional data, with applications throughout signal processing, machine learning, and the computational sciences. This is due in part to the simplicity of linear models but also due to the fact that principal components analysis (PCA) provides a closed-form and computationally efficient solution to the problem of finding an optimal low-rank approximation to a collection of data (an ensemble of signals in R^n).

More formally, if we stack a collection of d vectors (points) in R^n into the columns of Y ∈ R^{n×d}, then PCA finds the best rank-k estimate of Y by solving

$$\widehat{Y} = \arg\min_{X : \operatorname{rank}(X) \le k} \; \|Y - X\|_F. \qquad (1)$$
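As a concrete illustration of (1), the best rank-k approximation can be computed in closed form from the truncated SVD of the data matrix. The following minimal Python sketch is our own (not code from the paper); the variable names and the synthetic example are ours.

```python
import numpy as np

def best_rank_k(Y, k):
    """Best rank-k approximation of Y in the Frobenius norm, via the truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Example: 200 points near a 3-dimensional subspace of R^50.
rng = np.random.default_rng(0)
basis = np.linalg.qr(rng.standard_normal((50, 3)))[0]    # ONB for the subspace
Y = basis @ rng.standard_normal((3, 200)) + 0.01 * rng.standard_normal((50, 200))
Y_hat = best_rank_k(Y, 3)
print(np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))     # small relative residual
```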

In many cases, a linear subspace model is sufficient to characterize the intrinsic structure of the ensemble; however, in many emerging applications, a single subspace is not enough. Instead, ensembles must be modeled as living on a union of subspaces or a union of affine planes of mixed or equal dimension. Ensembles ranging from collections of images taken of objects under different illumination conditions (Basri and Jacobs, 2003; Ramamoorthi, 2002), motion trajectories of point-correspondences (Kanatani, 2001), to structured sparse and block-sparse signals (Lu and Do, 2008; Blumensath and Davies, 2009; Baraniuk et al., 2010)

are all well-approximated by a union of low-dimensional subspaces or a union of affine hyperplanes. Union of subspace models have also found utility in the classification of signals collected from complex and adaptive systems at different instances in time, e.g., electrical signals collected from the brain’s motor cortex

(Gowreesunker et al., 2011).

Unions of subspaces provide a natural extension to single subspace models, but providing an extension of PCA that leads to provable guarantees for learning multiple subspaces is challenging. This is due to the fact that segmentation—the identification of points that live in the same subspace—and subspace estimation must be performed simultaneously (Vidal et al., 2005; Vidal, 2011). However, if we can accurately sift through the points in the ensemble and determine which points lie along or near the same subspace, then subspace estimation becomes trivial. For this reason, many state-of-the-art methods for learning unions of subspaces rely on first forming local subspace estimates, i.e., low-rank approximations formed from a subset of points in the ensemble rather than from the entire collection of data (Vidal, 2011; Elhamifar and Vidal, 2013).

A common heuristic used to obtain local subspace estimates is to select points that lie within a Euclidean neighborhood of one another (or a fixed number of nearest neighbors (NNs)) and then form a local estimate from the set of NNs. At a high level, most NN-based approaches for subspace clustering can be summarized as consisting roughly of the following three steps (a minimal code sketch of the pipeline follows the list):

  • For the i-th point y_i in the set, select a set of points from the ensemble that live within an ε-radius of y_i in terms of their Euclidean distance. Denote this subset of points Y_{Λ_i}, where Λ_i is an index set containing the indices of all the neighbors of y_i.

  • Form a local low-rank PCA estimate by solving (1) for the points in the sub-matrix Y_{Λ_i}.

  • Compute the subspace affinity matrix W for the ensemble, where the (i, j) entry of the matrix represents the likelihood that y_i and y_j live close to the same subspace, or the degree to which y_i and y_j produce similar local subspace estimates.
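The three steps above can be sketched as follows. This is only an illustrative implementation under our own assumptions (a fixed number of nearest neighbors, local PCA via the SVD, and an affinity built from the principal angles between local estimates); it is not the specific construction used by any of the methods cited below.

```python
import numpy as np

def nn_local_subspace_affinity(Y, num_neighbors, k):
    """Toy NN-based affinity. Y is n x d (columns are points); k is the local subspace rank."""
    n, d = Y.shape
    Yn = Y / np.linalg.norm(Y, axis=0, keepdims=True)          # normalize the points
    bases = []
    for i in range(d):
        # Step 1: select the nearest neighbors of point i (Euclidean distance).
        dists = np.linalg.norm(Yn - Yn[:, [i]], axis=0)
        nbrs = np.argsort(dists)[: num_neighbors + 1]          # includes the point itself
        # Step 2: local rank-k PCA estimate of the neighborhood (assumes num_neighbors >= k).
        U, _, _ = np.linalg.svd(Yn[:, nbrs], full_matrices=False)
        bases.append(U[:, :k])
    # Step 3: affinity from the similarity of local subspace estimates
    # (here, the mean squared cosine of the principal angles between them).
    W = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            s = np.linalg.svd(bases[i].T @ bases[j], compute_uv=False)
            W[i, j] = np.mean(s ** 2)
    return W
```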

Methods that use NN sets to form local subspace estimates from the data include local subspace affinity (LSA) (Yan and Pollefeys, 2006), spectral clustering based on locally linear approximations (Arias-Castro et al., 2011), spectral curvature clustering (Chen and Lerman, 2009), and local best-fit flats (Zhang et al., 2012). The main differences between these methods lie either in the way that the entries of the affinity matrix are computed in Step 3 or the way in which this matrix is used to obtain an estimate of the underlying subspace structures present in the ensemble. In the case of approaches built upon spectral clustering (Shi and Malik, 2000; Ng et al., 2002), one performs spectral clustering on the subspace affinity matrix for the ensemble in order to cluster the data into different subspaces. In the case of consensus-based approaches, one finds a robust estimate of the mode of the local subspace estimates formed in Step 2; this problem can also be posed as an optimization on the subspace affinity matrix. We point the reader to (Vidal, 2011) for a thorough review of methods for subspace clustering.

When the subspaces present in the ensemble are linearly separable or non-intersecting, local subspace estimates formed from NNs provide relatively reliable and stable estimates of the subspaces present in the ensemble. However, neighborhood-based approaches quickly fail as the separation between the two structures decreases and as the subspace dimension increases relative to the number of points in each subspace. This is due in part to the fact that, as the dimension of the intersection between two subspaces increases, the Euclidean distance between points becomes a poor predictor of which points belong to the same subspace. Thus, we seek an alternative strategy for forming a local estimate that does not rely solely on whether points in the same subspace live in a local Euclidean neighborhood. Instead, our goal is to identify another strategy for “feature selection” that returns sets of points (feature sets) that lie along the same subspace.

1.2 Exact Feature Selection

Instead of computing local subspace estimates from sets of NNs, Elhamifar and Vidal (2009) propose a novel approach for feature selection based upon forming sparse representations of the data via ℓ1-minimization. The main intuition underlying their approach is that when a sparse representation of a point is formed with respect to the remaining points in the ensemble, the representation should only consist of other points that belong to the same subspace. When a sparse representation consists of points that lie in the same subspace, we say that exact feature selection (EFS) occurs. Under certain assumptions on both the sampling and the “distance between subspaces” (typically measured with respect to the principal angles between the subspaces or other related distances on the Grassmannian manifold), this approach to feature selection leads to provable guarantees that EFS will occur (Elhamifar and Vidal, 2010; Soltanolkotabi and Candès, 2012), even when the subspaces intersect.

We refer to this application of sparse recovery as endogenous sparse recovery due to the fact that representations are not formed from an external collection of primitives (such as a basis or dictionary) but are formed “from within” the data. Formally, for a set of d signals Y = {y_1, ..., y_d}, each of dimension n, the sparsest representation of the point y_i is defined as

$$c_i^* = \arg\min_{c \in \mathbb{R}^d} \; \|c\|_0 \quad \text{subject to} \quad y_i = \sum_{j \neq i} c(j)\, y_j, \qquad (2)$$

where ‖c‖_0 counts the number of non-zeros in its argument. Let Λ_i = supp(c_i^*) denote the subset of points selected to represent the i-th point and c(j) denote the contribution of the j-th point to the endogenous representation of y_i. By penalizing representations that require a large number of non-zero coefficients, the resulting representation will be sparse.

In general, finding the sparsest representation of a signal has combinatorial complexity; thus, sparse recovery methods such as basis pursuit (BP) (Chen et al., 1998) or low-complexity greedy methods (Davis et al., 1994) are employed to obtain approximate solutions to (2).

1.3 Contributions

In Elhamifar and Vidal (2010), the authors show that when subspaces are disjoint (intersect only at the origin) and the minimum principal angle between subspaces is sufficiently large, the points selected by BP will belong to the same subspace, i.e., EFS is guaranteed. Recently, Soltanolkotabi and Candès (2012) developed guarantees for EFS with BP from unions of intersecting subspaces.

In parallel with recent developments for subspace clustering with BP, in this paper, we study EFS with a low-complexity and greedy method for sparse signal recovery known as orthogonal matching pursuit (OMP). The main result of our analysis is a new geometric condition (Thm. 1) for EFS that highlights the tradeoff between the mutual coherence, i.e., the similarity between points living in different subspaces, and the covering radius of the points within a common subspace. The covering radius can be interpreted as the radius of the largest ball that can be embedded within each subspace without touching a point in the ensemble; the vector that lies at the center of this open ball, i.e., the vector in the subspace that attains the covering radius, is referred to as a deep hole. Thm. 1 suggests that subspaces can be arbitrarily close to one another and even intersect, as long as the data is distributed “nicely” along each subspace. By “nicely”, we mean that the points that lie on each subspace do not cluster together, leaving large gaps in the sampling of the underlying subspace. In Fig. 1, we illustrate the covering radius of a set of points on the sphere (the deep hole is denoted by a star).

Figure 1: Covering radius of points in a normalized subspace. The interior of the antipodal convex hull of points in a normalized subspace—a subspace of R^n mapped to the unit sphere—is shaded. The vector in the normalized subspace (unit circle) that attains the covering radius (deep hole) is marked with a star: when compared with the convex hull, the deep hole coincides with the maximal gap between the convex hull and the set of all vectors that live in the normalized subspace.

After introducing a general geometric condition for EFS, we extend this analysis to the case where the data live on what we refer to as a uniformly bounded union of subspaces (Thm. 3). In particular, we show that when the points living in a particular subspace are incoherent with the principal vectors that support pairs of subspaces in the ensemble, EFS can be guaranteed, even when non-trivial intersections exist between subspaces in the ensemble. Our condition for bounded subspaces suggests that, in addition to properties related to the sampling of the subspace, one can characterize the separability of pairs of subspaces by examining the correlation between the dataset and the unique set of principal vectors that support pairs of subspaces in the ensemble.

In addition to providing a theoretical analysis of EFS with OMP, the other main contribution of this work is revealing the gap between nearest neighbor-based (NN) approaches and sparse recovery methods, i.e., OMP and BP, for feature selection. In both synthetic and real world experiments, we observe that while both NN and sparse recovery methods have comparable rates of EFS when the subspaces are densely sampled, when the subspaces are sparsely sampled, sparse recovery methods provide significant advantages over NN. These empirical results point to an advantage of forming sparse representations from within the data; sparse recovery methods provide a natural way to reveal the subspace affinity amongst points that might be far away from one another in a Euclidean sense. By exploiting non-local relationships between points, sparse recovery methods are capable of providing reliable subspace estimates with far fewer points than neighborhood-based estimates. See Fig. 2 for an example of the affinity matrices formed from pairs of face subspaces where the goal is to separate points that live on different “illumination subspaces”.


Figure 2: Comparison of subspace affinity matrices for illumination subspaces. In each row, we display the subspace affinity matrices obtained for a different pair of illumination subspaces in the dataset, for NN (left), BP (middle), and OMP (right). To the left of the affinity matrices, we display an exemplar image from each illumination subspace.

1.4 Paper Organization

We now provide a roadmap for the rest of the paper.

Section 2. We introduce our signal model, the sparse subspace clustering (SSC) algorithm introduced in (Elhamifar and Vidal, 2009), and describe how OMP may be used for feature selection in subspace clustering.

Sections 3 and 4. We develop the main theoretical results of this paper and provide new geometric insights into EFS from unions of subspaces. We introduce sufficient conditions for EFS to occur with OMP for general unions of subspaces in Thm. 1, disjoint unions in Cor. 1, and uniformly bounded unions in Thm. 3.

Section 5. We conduct a number of numerical experiments to validate our theory and compare sparse recovery methods with NN-based feature selection. Experiments are provided for both synthetic and real data.

Section 6. We discuss the implications of our theoretical analysis and empirical results on sparse approximation, dictionary learning, and compressive sensing. We conclude with a number of interesting open questions and future lines of research.

Section 7. We supply the proofs of the results contained in Sections 3 and 4.

1.5 Notation

In this paper, we will work solely in real finite-dimensional vector spaces, R^n. We write vectors x in lowercase script, matrices X in uppercase script, and scalar entries of vectors as x(j). The standard ℓp-norm is defined as

$$\|x\|_p = \Big( \sum_{j=1}^{n} |x(j)|^p \Big)^{1/p},$$

where p ≥ 1. The “ℓ0-norm” of a vector x is defined as the number of non-zero elements in x. The support of a vector x, often written as supp(x), is the set containing the indices of its non-zero coefficients; hence, ‖x‖_0 = |supp(x)|. We denote the Moore-Penrose pseudoinverse of a matrix X as X^†. If X = UΣV^T, then X^† = VΣ^†U^T, where we obtain Σ^† by taking the reciprocal of the non-zero entries in Σ, leaving the zeros in their places, and taking the transpose. An orthonormal basis (ONB) U that spans the subspace S of dimension k satisfies the following two properties: U^T U = I_k and range(U) = S, where I_k is the k × k identity matrix. Let P_Λ = Y_Λ Y_Λ^† denote an ortho-projector onto the subspace spanned by the sub-matrix Y_Λ.

2 Greedy Feature Selection for Subspace Clustering

In this section, we introduce our signal model, detail the sparse subspace clustering (SSC) method developed by Elhamifar and Vidal (2009), and discuss the use of orthogonal matching pursuit for feature selection in subspace clustering.

2.1 Signal Model

Given a set of p subspaces of R^n, {S_1, ..., S_p}, each of dimension k, we generate a “subspace cluster” by sampling d_i points from the subspace S_i. Let Z_i denote the set of points in the i-th subspace cluster and let Z = ∪_i Z_i denote the union of these subspace clusters. Each point in Z is mapped to the unit sphere to generate a union of normalized subspace clusters. Let

Y = {y_1, ..., y_d} denote the resulting set of d unit-norm points and let Y_k be the set of unit-norm points that lie in the span of subspace S_k. Let Y_{-k} denote the set of points in Y with the points in Y_k excluded.

Let Y ∈ R^{n×d} denote the matrix of normalized data, where each point in the ensemble is stacked into the columns of Y. The points in Y_k can be expanded in terms of an ONB U_k ∈ R^{n×k} that spans S_k and subspace coefficients A_k, where Y_k = U_k A_k. Let Y_{-k} denote the matrix containing the points in Y with the submatrix Y_k excluded.

2.2 Sparse Subspace Clustering

In (Elhamifar and Vidal, 2009), the authors propose a novel approach to subspace clustering called sparse subspace clustering (SSC) that employs a relaxation of the ℓ0-minimization problem in (2). To be precise, the SSC algorithm proceeds by solving the following basis pursuit (BP) problem (Chen et al., 1998) for each point y_i in Y:

$$c_i^* = \arg\min_{c} \|c\|_1 \quad \text{subject to} \quad y_i = \sum_{j \neq i} c(j)\, y_j. \qquad (3)$$

After finding the solution to this ℓ1-minimization problem for each point in the ensemble, each d-dimensional feature vector c_i^* is placed into the i-th row (or column) of a matrix C, and spectral clustering (Shi and Malik, 2000; Ng et al., 2002) is performed on the graph Laplacian of the affinity matrix W = |C| + |C|^T.

In (Elhamifar and Vidal, 2013), the authors provide an extension of SSC to the case where the data might not admit an exact representation with respect to other points in the ensemble. In this case, they employ an inequality-constrained version of BP known as basis pursuit denoising (BPDN) for feature selection; for each point y_i, they solve the following problem:

$$c_i^* = \arg\min_{c} \|c\|_1 \quad \text{subject to} \quad \Big\| y_i - \sum_{j \neq i} c(j)\, y_j \Big\|_2 \le \kappa, \qquad (4)$$

where κ is a parameter that is selected based upon the amount of noise in the data. Recently, Wang and Xu (2013) provided an analysis of EFS for a variant of the formulation in (4) for noisy unions of subspaces. In (Soltanolkotabi et al., 2013), the authors propose a robust procedure for subspace clustering from noisy data that extends the BPDN framework studied in (Elhamifar and Vidal, 2013; Wang and Xu, 2013).
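For illustration, the BPDN problem in (4) can be handed to an off-the-shelf convex solver. The sketch below uses CVXPY; the function name, the parameter kappa, and the way we exclude a point's own column are our own choices, not the authors' implementation.

```python
import cvxpy as cp
import numpy as np

def bpdn_feature_vector(Y, i, kappa):
    """Solve (4) for the i-th column of Y (n x d), excluding y_i from its own representation."""
    n, d = Y.shape
    y = Y[:, i]
    c = cp.Variable(d)
    constraints = [cp.norm(y - Y @ c, 2) <= kappa, c[i] == 0]
    cp.Problem(cp.Minimize(cp.norm(c, 1)), constraints).solve()
    return c.value
```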

2.3 Greedy Feature Selection

Instead of solving the sparse recovery problem in (2) via ℓ1-minimization, as originally proposed in SSC, we will study the behavior of a low-complexity method for sparse feature selection known as orthogonal matching pursuit (OMP). We detail the OMP algorithm in Alg. 1.

For each point y_i, we run Alg. 1 to obtain a k-sparse representation of the signal with respect to the remaining points in Y. The output of the OMP algorithm is a feature set Λ_i, which indexes the columns in Y selected to form an endogenous representation of y_i. After computing feature sets for each point in the dataset, either a spectral clustering method or a consensus-based method may then be employed. In (Dyer, 2011), we introduce a consensus-based algorithm for subspace clustering that uses OMP for feature selection and also provide an empirical study of different feature selection strategies for both consensus and spectral clustering-based approaches on both synthetic and real world data (Chap. 7–8).

To form the subspace affinity for the dataset, a d-dimensional sparse feature vector c_i is computed by stacking the k-dimensional projection Y_{Λ_i}^† y_i into the entries of c_i indexed by the feature set Λ_i, where Y_{Λ_i}^† is the pseudoinverse of the submatrix Y_{Λ_i}. The remaining entries in c_i are set to zero. The feature vector c_i is then stacked in the i-th row of the matrix C and the subspace affinity matrix of the ensemble is computed as W = |C| + |C|^T. Finally, spectral clustering is performed either on the graph Laplacian or the normalized graph Laplacian of W.
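A minimal sketch of this affinity construction is given below, assuming the feature sets and coefficients have already been computed for every point (e.g., with the OMP routine sketched after Algorithm 1 below); the function name and arguments are ours.

```python
import numpy as np

def affinity_from_features(feature_sets, coeffs, d):
    """Stack the sparse feature vectors into C (d x d) and symmetrize: W = |C| + |C|^T."""
    C = np.zeros((d, d))
    for i, (Lam, c) in enumerate(zip(feature_sets, coeffs)):
        C[i, Lam] = c                  # nonzero entries of the i-th feature vector
    return np.abs(C) + np.abs(C).T
```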

Although OMP is known to be suboptimal for standard applications of sparse signal recovery, our empirical results provided in Section 5.4 suggest that OMP provides a low-complexity alternative to ℓ1-minimization methods for SSC. An obvious advantage of using greedy methods is that they exhibit reduced computational complexity when compared to convex optimization-based approaches, thus enabling their use for feature selection from large collections of data. In addition, we find that despite the fact that BPDN provides better rates of EFS than OMP when we carefully tune the noise parameter κ, OMP provides comparable (and in some cases better) clustering performance than BPDN. These empirical results suggest that OMP offers a powerful low-complexity alternative to ℓ1-minimization for feature selection in SSC. We point the reader to Fig. 2 for an example of the affinity matrices obtained via OMP, BP, and NN for collections of images of faces under different lighting conditions.

  Input: Input signal y, a matrix Y containing d signals in its columns, and a stopping criterion (either the sparsity k or the approximation error κ).
  Output: An index set Λ containing the indices of all atoms selected in the pursuit.
  Initialize: Set the residual to the input signal, r = y, and set Λ = ∅.
  1. Select the atom y_j that is maximally correlated with the residual, j = arg max_{j ∉ Λ} |⟨r, y_j⟩|, and add it to Λ.
  2. Update the residual by projecting y into the space orthogonal to the span of Y_Λ: r = (I − P_Λ) y.
  3. Repeat steps (1)–(2) until the stopping criterion is reached, e.g., either |Λ| = k or the norm of the residual ‖r‖_2 ≤ κ.
Algorithm 1: Orthogonal Matching Pursuit
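A straightforward NumPy implementation of Algorithm 1 is sketched below (ours, not the authors' code). For endogenous sparse recovery, the column corresponding to the signal itself should be removed from Y, or its correlation forced to zero, before running the pursuit.

```python
import numpy as np

def omp_feature_set(y, Y, k, tol=1e-6):
    """Orthogonal matching pursuit (Alg. 1): greedily select up to k columns of Y to represent y.

    Returns the index set Lambda and the least-squares coefficients on those columns.
    """
    Lam, residual = [], y.copy()
    coeffs = np.zeros(0)
    for _ in range(k):
        if np.linalg.norm(residual) <= tol:      # residual-based stopping criterion
            break
        # Step 1: atom most correlated with the current residual.
        corr = np.abs(Y.T @ residual)
        corr[Lam] = -np.inf                      # never reselect an atom
        Lam.append(int(np.argmax(corr)))
        # Step 2: project y onto span(Y_Lam) and update the residual.
        coeffs, *_ = np.linalg.lstsq(Y[:, Lam], y, rcond=None)
        residual = y - Y[:, Lam] @ coeffs
    return Lam, coeffs
```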

3 Exact Feature Selection from Unions of Subspaces

In this section, we provide a formal definition of EFS and develop sufficient conditions that guarantee that EFS will occur for all of the points contained within a particular subspace cluster.

3.1 Exact Feature Selection

In order to guarantee that OMP returns a sample set that yields an accurate local subspace estimate, we will be interested in determining when the feature set returned by Alg. 1 only contains points that belong to the same subspace cluster; in this case, we say that exact feature selection (EFS) occurred. EFS provides a natural condition for studying performance of both subspace consensus and spectral clustering methods due to the fact that when EFS occurs for a point, this results in a local subspace estimate that coincides with one of the true subspaces contained within the data. We now supply a formal definition of EFS.

Definition 1 (Exact feature selection)

Let Λ_k = {i : y_i = P_k y_i} index the set of points in Y that live in the span of subspace S_k, where P_k is a projector onto the span of subspace S_k. For a point y_i with feature set Λ_i, if Λ_i ⊆ Λ_k, we say that Λ_i contains exact features.

3.2 Geometric Conditions for EFS

3.2.1 Preliminaries

Our main result in Thm. 1 below requires measures of both the distance between points in different subspace clusters and within the same subspace cluster. A natural measure of the similarity between points living in different subspaces is the mutual coherence. A formal definition of the mutual coherence is provided below in Def. 2.

Definition 2

The mutual coherence between the points in the sets Y_k and Y_j is defined as

$$\mu_c(Y_k, Y_j) \;=\; \max_{u \in Y_k, \; v \in Y_j} \; |\langle u, v \rangle|. \qquad (5)$$

In words, the mutual coherence provides a point-wise measure of the normalized inner product (coherence) between all pairs of points that lie in two different subspace clusters.

Let μ_c(Y_k) denote the maximum mutual coherence between the points in Y_k and all other subspace clusters in the ensemble, where μ_c(Y_k) = max_{j ≠ k} μ_c(Y_k, Y_j).

A related quantity that provides an upper bound on the mutual coherence is the cosine of the first principal angle between the subspaces. The first principal angle θ_0^{(k,j)} between subspaces S_k and S_j is the smallest angle between a pair of unit vectors drawn from S_k × S_j. Formally, the first principal angle is defined as

$$\cos\big(\theta_0^{(k,j)}\big) \;=\; \max_{u \in S_k, \; v \in S_j} \; \langle u, v \rangle \quad \text{subject to} \quad \|u\|_2 = \|v\|_2 = 1. \qquad (6)$$

Whereas the mutual coherence provides a measure of the similarity between a pair of unit-norm vectors that are contained in the sets Y_k and Y_j, the cosine of the minimum principal angle provides a measure of the similarity between a pair of unit-norm vectors that lie in the span of S_k and S_j. For this reason, the cosine of the first principal angle provides an upper bound on the mutual coherence. The following upper bound is in effect for each pair of subspace clusters in the ensemble:

$$\mu_c(Y_k, Y_j) \;\le\; \cos\big(\theta_0^{(k,j)}\big). \qquad (7)$$

To measure how well points in the same subspace cluster cover the subspace they live on, we will study the covering radius of each normalized subspace cluster relative to the projective distance. Formally, the covering radius of the set Y_k is defined as

$$\operatorname{cover}(Y_k) \;=\; \max_{u \in S_k, \, \|u\|_2 = 1} \;\; \min_{v \in Y_k} \; \operatorname{dist}(u, v), \qquad (8)$$

where the projective distance between two vectors u and v is defined relative to the acute angle between the vectors,

$$\operatorname{dist}(u, v) \;=\; \sqrt{1 - \frac{\langle u, v \rangle^2}{\|u\|_2^2 \, \|v\|_2^2}} \;=\; \sin(\theta_{u,v}). \qquad (9)$$

The covering radius of the normalized subspace cluster can be interpreted as the size of the largest open ball that can be placed in the set of all unit-norm vectors that lie in the span of S_k, without touching a point in Y_k.

Let (u_*, v_*) denote a pair of vectors that attain the covering radius of Y_k; u_* is referred to as a deep hole in Y_k along S_k. The covering radius can be interpreted as the sine of the angle between the deep hole u_* and its nearest neighbor v_* in Y_k. We show the geometry underlying the covering radius in Fig. 1.
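The quantities introduced above can be estimated numerically. The sketch below (our own notation and function names) computes the mutual coherence and the cosine of the first principal angle exactly, and approximates the covering radius in (8) by sampling random unit vectors in the subspace; the number of Monte-Carlo samples is an arbitrary choice.

```python
import numpy as np

def mutual_coherence(Yk, Yj):
    """Maximum absolute inner product between unit-norm columns of Yk and Yj (Def. 2)."""
    return float(np.max(np.abs(Yk.T @ Yj)))

def cos_first_principal_angle(Uk, Uj):
    """Cosine of the first principal angle between span(Uk) and span(Uj), for ONB inputs."""
    return float(np.linalg.svd(Uk.T @ Uj, compute_uv=False)[0])

def covering_radius(Yk, Uk, num_samples=20000, seed=0):
    """Monte-Carlo estimate of (8): the largest projective distance (9) from a random
    unit vector in span(Uk) to its nearest point in Yk."""
    rng = np.random.default_rng(seed)
    U = Uk @ rng.standard_normal((Uk.shape[1], num_samples))
    U /= np.linalg.norm(U, axis=0, keepdims=True)            # random unit vectors in the subspace
    closest = np.max(np.abs(Yk.T @ U), axis=0)               # best match in Yk for each sample
    return float(np.max(np.sqrt(np.clip(1.0 - closest ** 2, 0.0, None))))
```

In line with (7), mutual_coherence(Yk, Yj) should never exceed cos_first_principal_angle(Uk, Uj) when the columns of Yk and Yj lie in the corresponding subspaces.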

In the sequel, we will be interested in the maximum (worst-case) covering attained over all sets formed by removing a single point from Y_k. We supply a formal definition below in Def. 3.

Definition 3

The maximum covering diameter of the set Y_k along the subspace S_k is defined as

$$\epsilon_k \;=\; 2 \max_{i} \; \operatorname{cover}\big(Y_k \setminus \{y_i\}\big).$$

Hence, the covering radius equals ε_k/2.

A related quantity is the inradius of the set Y_k, or the cosine of the angle between a point in Y_k and any vector in the normalized subspace that attains the covering radius. The relationship between the covering diameter and the inradius r(Y_k) is given by

$$r(Y_k) \;=\; \sqrt{1 - \frac{\epsilon_k^2}{4}}. \qquad (10)$$

A geometric interpretation of the inradius is that it measures the distance from the origin to the maximal gap in the antipodal convex hull of the points in Y_k. The geometry underlying the covering radius and the inradius is displayed in Fig. 1.

3.2.2 General Result for EFS

We are now equipped to state our main geometric result for EFS with OMP. The proof is contained in Section 7.1.

Theorem 1

Let ε_k denote the maximal covering diameter of the subspace cluster Y_k as defined in Def. 3. A sufficient condition for EFS to occur for all points in Y_k is that the mutual coherence satisfies

$$\mu_c(Y_k) \;<\; \sqrt{1 - \frac{\epsilon_k^2}{4}} \;-\; \epsilon_k \cos(\theta_0^*), \qquad (11)$$

where θ_0^* is the minimum principal angle between S_k and any other subspace in the ensemble, as defined in (6).

In words, this condition requires that the mutual coherence between points in different subspaces is less than the difference of two terms that both depend on the covering radius of points along a single subspace. The first term on the RHS of (11) is equal to the inradius, as defined in (10); the inradius provides a measure of the coherence between a point that attains the covering radius of the subspace cluster and its nearest neighbor in Y_k. The second term on the RHS of (11) is the product of the cosine of the minimum principal angle between pairs of subspaces in the ensemble and the covering diameter of the points in Y_k.

When subspaces in the ensemble intersect, i.e., cos(θ_0^*) = 1, condition (11) in Thm. 1 can be simplified to

$$\mu_c(Y_k) \;<\; \sqrt{1 - \frac{\epsilon_k^2}{4}} \;-\; \epsilon_k. \qquad (12)$$

In this case, EFS can be guaranteed as long as the points in different subspace clusters are bounded away from intersections between subspaces. When the covering radius shrinks to zero, Thm. 1 requires that μ_c(Y_k) < 1, or that points from different subspaces do not lie exactly in the subspace intersection, i.e., that they are identifiable from one another.

3.2.3 Geometry underlying EFS

The main idea underlying the proof of Thm. 1 is that, at each iteration of Alg. 1, we require that the residual used to select a point to be included in the feature set is closer to a point in the correct subspace cluster (Y_k) than to a point in an incorrect subspace cluster (Y_{-k}). To be precise, we require that the normalized inner product of the residual signal r and all points outside of the correct subspace cluster satisfy

$$\max_{z \in Y_{-k}} \; \frac{|\langle r, z \rangle|}{\|r\|_2} \;<\; \max_{v \in Y_k} \; \frac{|\langle r, v \rangle|}{\|r\|_2} \qquad (13)$$

at each iteration of Alg. 1. To provide the result in Thm. 1, we require that (13) holds for all r ∈ S_k, i.e., for all possible residual vectors.

A geometric interpretation of the EFS condition in Thm. 1 is that the orthogonal projection of all points outside of a subspace must lie within the antipodal convex hull of the set of normalized points that span the subspace. To see this, consider the projection of the points in Y_{-k} onto S_k. Let

$$z_* = \arg\min_{u \in S_k} \; \|z - u\|_2$$

denote the point on subspace S_k that is closest to the signal z ∈ Y_{-k}. We can also write this projection in terms of an orthogonal projection operator, z_* = P_k z = U_k U_k^T z, where U_k is an ONB that spans S_k and P_k = U_k U_k^T.

Figure 3: Geometry underlying EFS. A union of two disjoint subspaces of different dimension: the antipodal convex hull of a set of points (red circles) living on a 2D subspace is shaded (green). In (a), we show an example where EFS is guaranteed—the projection of points along the 1D subspace lie inside the shaded convex hull of points in the plane. In (b), we show an example where EFS is not guaranteed—the projection of points along the 1D subspace lie outside the shaded convex hull.

By definition, the normalized inner product of the residual with points in incorrect subspace clusters is upper bounded as

$$\frac{|\langle r, z \rangle|}{\|r\|_2} \;\le\; \|P_k z\|_2, \qquad \forall \, r \in S_k, \; z \in Y_{-k}. \qquad (14)$$

Thus to guarantee EFS, we require that the cosine of the angle between all signals in Y_{-k} and their projection onto S_k is less than the inradius of Y_k. Said another way, the EFS condition requires that the length of all projected points be less than the inradius of Y_k.
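This projection test is easy to check numerically. The sketch below is our own; it assumes an ONB Uk for the subspace and an inradius estimate (e.g., sqrt(1 - (epsilon/2)^2) via (10), using the covering-radius routine sketched in Section 3.2.1), and verifies whether every point from the other subspace clusters has a projection shorter than the inradius.

```python
import numpy as np

def efs_condition_holds(Y_out, Uk, inradius):
    """Check the geometric condition described above: every point z from another subspace
    cluster, projected onto span(Uk), must satisfy ||P_k z||_2 < inradius."""
    proj_lengths = np.linalg.norm(Uk @ (Uk.T @ Y_out), axis=0)   # ||P_k z||_2 for each column z
    return bool(np.max(proj_lengths) < inradius)
```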

In Fig. 3, we provide a geometric visualization of the EFS condition for a union of disjoint subspaces (union of a 1D subspace with a 2D subspace). In (a), we show an example where EFS is guaranteed because the projection of the points outside of the 2D subspace lie well within the antipodal convex hull of the points along the normalized 2D subspace (ring). In (b), we show an example where EFS is not guaranteed because the projection of the points outside of the 2D subspace lie outside of the antipodal convex hull of the points along the normalized 2D subspace (ring).

3.2.4 EFS for Disjoint Subspaces

When the subspaces in the ensemble are disjoint, i.e., cos(θ_0) < 1, Thm. 1 can be simplified further by using the bound for the mutual coherence in (7). This simplification results in the following corollary.

Corollary 1

Let θ_0^{(k,j)} denote the first principal angle between the disjoint subspaces S_k and S_j, and let ε_k denote the maximal covering diameter of the points in Y_k. A sufficient condition for EFS to occur for all points in Y_k is that

$$\cos\big(\theta_0^{(k,j)}\big) \;<\; \frac{\sqrt{1 - \epsilon_k^2/4}}{1 + \epsilon_k}, \qquad \forall \, j \neq k. \qquad (15)$$

3.3 Connections to Previous Work

In this section, we will connect our results for OMP with previous analyses of EFS with BP provided in (Elhamifar and Vidal, 2010; Soltanolkotabi and Candès, 2012) for disjoint and intersecting subspaces respectively. Following this, we will contrast the geometry underlying EFS with exact recovery conditions used to guarantee support recovery for both OMP and BP (Tropp, 2004, 2006).

3.3.1 Subspace Clustering with BP

In (Elhamifar and Vidal, 2010), the authors develop the following sufficient condition for EFS to occur for BP from a union of disjoint subspaces:

$$\sqrt{k} \, \cos(\theta_0) \;<\; \max_{\widetilde{Y} \in W_k} \; \sigma_{\min}(\widetilde{Y}), \qquad (16)$$

where k is the subspace dimension, W_k is the set of all full-rank sub-matrices of the data matrix Y_k, and σ_min(Ỹ) is the minimum singular value of the sub-matrix Ỹ. Since we assume that all of the data points have been normalized, σ_min(Ỹ) ≤ 1; thus, the best-case result that can be obtained is that cos(θ_0) < 1/√k. This suggests that the cosine of the minimum principal angle of the union must go to zero, i.e., the union must consist of (nearly) orthogonal subspaces, as the subspace dimension increases.

In contrast to the condition in (16), the conditions we provide in Thm. 1 and Cor. 1 do not depend on the subspace dimension. Rather, we require that there are enough points in each subspace to achieve a sufficiently small covering; in which case, EFS can be guaranteed for subspaces of any dimension.

In (Soltanolkotabi and Candès, 2012), the authors develop the following sufficient condition for EFS from unions of intersecting subspaces with BP:

$$\mu_v(Y_k) \;=\; \max_{z \in Y_{-k}} \; \|V_k^T z\|_\infty \;<\; r(Y_k), \qquad (17)$$

where the matrix V_k contains the dual directions (the dual vectors for each point in Y_k embedded in R^n) in its columns (see Def. 2.2 in Soltanolkotabi and Candès (2012) for a formal definition of the dual directions and insight into the geometry underlying their guarantees for EFS via BP), and r(Y_k) is the inradius as defined in (10). In words, (17) requires that the maximum coherence between any point in Y_{-k} and the dual directions contained in V_k be less than the inradius of the points in Y_k.

To link the result in (17) to our guarantee for OMP in Thm. 1, we observe that while (17) requires that μ_v be less than the inradius, Thm. 1 requires that the mutual coherence μ_c be less than the inradius minus an additional term that depends on the covering radius. For an arbitrary set of points that live on a union of subspaces, the precise relationship between the two coherence parameters μ_c (coherence between two points in different subspace clusters) and μ_v (coherence between a point in a subspace cluster and the dual directions of points in a different subspace cluster) is not straightforward; however, when the points in each subspace cluster are distributed uniformly and at random along each subspace, the dual directions will also be distributed uniformly along each subspace (this approximation is based upon personal correspondence with M. Soltanolkotabi, one of the authors that developed the results for EFS with BP in Soltanolkotabi and Candès (2012)). In this case, μ_v will be roughly equivalent to the mutual coherence μ_c.

This simplification reveals the connection between the result in (17) for BP and the condition in Thm. 1 for OMP. In particular, when μ_v ≈ μ_c, our result for OMP requires that the mutual coherence is smaller than the inradius minus an additional term that is linear in the covering diameter ε_k. For this reason, our result in Thm. 1 is more restrictive than the result in (17). The gap between the two bounds shrinks to zero only when the minimum principal angle approaches π/2 or when the covering diameter ε_k approaches zero.

In our empirical studies, we find that when BPDN is tuned to an appropriate value of the noise parameter κ, BPDN does in fact provide higher rates of EFS than OMP (while BPDN provides higher rates of EFS than OMP when we employ a homotopy approach to find an optimal value of κ, for a wide range of values of κ, BPDN and OMP provide comparable rates of EFS). This suggests that the theoretical gap between the two approaches might not be an artifact of our current analysis; rather, there might exist an intrinsic gap between the performance of each method with respect to EFS. Nonetheless, an interesting finding from our empirical study in Section 5.4 is that despite the fact that BPDN provides better rates of EFS than OMP, OMP often provides better clustering results than BPDN. For these reasons, we maintain that OMP offers a powerful low-complexity alternative to ℓ1-minimization approaches for feature selection.

3.3.2 Exact Recovery Conditions for Sparse Recovery

To provide further intuition about EFS in endogenous sparse recovery, we will compare the geometry underlying the EFS condition with the geometry of the exact recovery condition (ERC) for sparse signal recovery methods provided in (Tropp, 2004, 2006).

To guarantee exact support recovery for a signal which has been synthesized from a linear combination of atoms from the sub-matrix Φ_Λ, we must ensure that we can recover an approximation of the signal that consists solely of atoms from Φ_Λ. Let Φ_{Λ^c} denote the set of atoms in Φ that are not indexed by the set Λ. The exact recovery condition (ERC) provided below in Thm. 2 is sufficient to guarantee that we obtain exact support recovery for both BP and OMP.

Theorem 2

(Tropp, 2004) For any signal supported over the sub-dictionary Φ_Λ, exact support recovery is guaranteed for both OMP and BP if

$$\max_{z \in \Phi_{\Lambda^c}} \; \|\Phi_{\Lambda}^{\dagger} z\|_1 \;<\; 1. \qquad (18)$$

A geometric interpretation of the ERC is that it provides a measure of how far a projected atom outside of the set Λ lies from the antipodal convex hull of the atoms in Φ_Λ. When a projected atom lies outside of the antipodal convex hull formed by the set of points in the sub-dictionary Φ_Λ, then the ERC is violated and support recovery is not guaranteed. For this reason, the ERC requires that the maximum coherence between the atoms in Φ is sufficiently low or that Φ is incoherent.

While the ERC requires a global incoherence property on all of the columns of Φ, we can interpret EFS as requiring a local incoherence property. In particular, the EFS condition requires that the projection of atoms in an incorrect subspace cluster onto S_k must be incoherent with any deep holes in Y_k along S_k. In addition, we need that the points within a subspace cluster are coherent in order to produce a small covering radius.

4 EFS for Uniformly Bounded Unions of Subspaces

In this section, we study the connection between EFS and the higher-order principal angles (beyond the minimum angle) between pairs of intersecting subspaces.

4.1 Subspace Distances

To characterize the “distance” between pairs of subspaces in the ensemble, the principal angles between subspaces will prove useful. As we saw in the previous section, the first principal angle θ_0 between subspaces S_k and S_j, of dimensions k_1 and k_2, is defined as the smallest angle between a pair of unit vectors drawn from S_k × S_j. The vector pair (u_1, v_1) that attains this minimum is referred to as the first set of principal vectors. The second principal angle θ_1 is defined much like the first, except that the second set of principal vectors that define it are required to be orthogonal to the first set of principal vectors. The remaining principal angles are defined recursively in this way. The sequence of principal angles, θ_0 ≤ θ_1 ≤ ⋯, is non-decreasing and all of the principal angles lie between 0 and π/2.

The definition above provides insight into what the principal angles/vectors tell us about the geometry underlying a pair of subspaces; in practice, however, the principal angles are not computed in this recursive manner. Rather, a computationally efficient way to compute the principal angles between two subspaces S_k and S_j is to first compute the singular values of the matrix G = U_k^T U_j, where U_k is an ONB that spans subspace S_k. Let G = ŨΣṼ^T denote the SVD of G and let σ_kj = [σ_1, ..., σ_m] denote the singular values of G, where m = min(k_1, k_2) is the minimum dimension of the two subspaces. The smallest principal angle θ_0 is related to the largest entry of σ_kj via the relationship cos(θ_0) = σ_1. For our subsequent discussion, we will refer to the singular values of G as the cross-spectra of the subspace pair (S_k, S_j).
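The following short sketch (ours) computes the cross-spectra, the principal angles, and the dimension of the intersection directly from the ONBs of two subspaces; the tolerance used to declare a singular value equal to one is an arbitrary choice.

```python
import numpy as np

def cross_spectra(Uk, Uj, tol=1e-10):
    """Singular values of Uk^T Uj (ONB inputs): the cosines of the principal angles."""
    sigma = np.linalg.svd(Uk.T @ Uj, compute_uv=False)
    angles = np.arccos(np.clip(sigma, -1.0, 1.0))       # principal angles, non-decreasing
    intersection_dim = int(np.sum(sigma > 1.0 - tol))   # number of singular values equal to one
    return sigma, angles, intersection_dim
```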

A pair of subspaces is said to be disjoint if the minimum principal angle is greater than zero. Non-disjoint or intersecting subspaces are defined as subspaces with minimum principal angle equal to zero. The dimension of the intersection between two subspaces is equivalent to the number of principal angles equal to zero or, equivalently, the number of entries of the cross-spectra that are equal to one. The overlap between two subspaces is defined as the rank of G or, equivalently, q = ‖σ_kj‖_0, where q ≤ min(k_1, k_2).

4.2 Conditions for EFS from Bounded Unions

The sufficient conditions for EFS in Thm. 1 and Cor. 1 reveal an interesting relationship between the covering radius and the minimum principal angle between pairs of subspaces in the ensemble. However, we have yet to reveal any dependence between EFS and higher-order principal angles. To make this connection more apparent, we will make additional assumptions about the distribution of points in the ensemble, namely that the dataset produces a uniformly bounded union of subspaces relative to the principal vectors supporting pairs of subspaces in the ensemble.

Let Y = [Y_k Y_j] denote a collection of unit-norm data points, where Y_k and Y_j contain the points in subspaces S_k and S_j, respectively. Let G = U_k^T U_j = ŨΣṼ^T denote the SVD of G, where rank(G) = q, and let G_k = U_k Ũ_q denote the set of left principal vectors of the pair that are associated with the nonzero singular values in Σ, where Ũ_q contains the first q columns of Ũ. Similarly, let G_j = U_j Ṽ_q denote the set of right principal vectors that are associated with the nonzero singular values in Σ. When the points in each subspace are incoherent with the principal vectors in the columns of G_k and G_j, we say that the ensemble is a uniformly bounded union of subspaces. Formally, we require that the following incoherence property holds:

$$\max\Big( \max_{y \in Y_k} \|G_k^T y\|_\infty, \;\; \max_{y \in Y_j} \|G_j^T y\|_\infty \Big) \;\le\; \gamma, \qquad (19)$$

where ‖·‖_∞ returns the entry-wise maximum and 0 < γ ≤ 1. This property requires that the inner products between the points in a subspace and the set of principal vectors that span the non-orthogonal directions between a pair of subspaces are bounded by a fixed constant.

When the points in each subspace are distributed such that (19) holds, we can rewrite the mutual coherence between any two points from different subspaces to reveal its dependence on higher-order principal angles. In particular, we show (in Section 7.2) that the coherence between the residual used in Alg. 1 to select the next point to be included in the representation of a point y_i ∈ Y_k, and a point in Y_j, is upper bounded by

(20)

where γ is the bounding constant of the data and ‖σ_kj‖_1 is the ℓ1-norm of the cross-spectra or, equivalently, the trace norm of G. Using the bound in (20), we arrive at the following sufficient condition for EFS from uniformly bounded unions of subspaces.

Theorem 3

Let Y = Y_k ∪ Y_j be a uniformly bounded union of subspaces as defined in (19), where Y_k ⊂ S_k, Y_j ⊂ S_j, and γ is the bounding constant. Let σ_kj denote the cross-spectra of the subspaces S_k and S_j. A sufficient condition for EFS to occur for all of the points in Y_k is that the covering diameter ε_k be sufficiently small relative to both the bounding constant γ and the ℓ1-norm of the cross-spectra ‖σ_kj‖_1.

This condition requires that both the covering diameter of each subspace and the bounding constant of the union be sufficiently small in order to guarantee EFS. One way to guarantee that the ensemble has a small bounding constant is to constrain the total amount of energy that points in Y_k have in the q-dimensional subspace spanned by the principal vectors in G_k.

Our analysis for bounded unions assumes that the nonzero entries of the cross-spectra are equal, and thus each pair of supporting principal vectors in (G_k, G_j) is equally important in determining whether points in Y_k will admit EFS. However, this assumption is not true in general. When the union is supported by principal vectors with non-uniform principal angles, our analysis suggests that a weaker form of incoherence is required. Instead of requiring incoherence with all principal vectors, the data must be sufficiently incoherent with the principal vectors that correspond to small principal angles (or large values of the cross-spectra). This means that as long as points are not concentrated along the principal directions with small principal angles (i.e., intersections), then EFS can be guaranteed, even when subspaces exhibit non-trivial intersections. To test this prediction, we will study a bounded energy model for unions of subspaces in Section 5.2. We show that when the dataset is sparsely sampled (larger covering radius), reducing the amount of energy that points contain in the intersections between two subspaces dramatically increases the probability that points admit EFS.

Finally, our analysis of bounded unions suggests that the decay of the cross-spectra is likely to play an important role in determining whether points will admit EFS or not. To test this hypothesis, we will study the role that the structure of the cross-spectra plays in EFS in Section 5.3.

5 Experimental Results

In our theoretical analysis of EFS in Sections 3 and 4, we revealed an intimate connection between the covering radius of subspaces and the principal angles between pairs of subspaces in the ensemble. In this section, we conduct an empirical study to explore these connections further. In particular, we study the probability of EFS as we vary the covering radius as well as the dimension of the intersection and/or overlap between subspaces. In addition, we study the role that the structure of the cross-spectra and the amount of energy that points have in subspace intersections play in EFS.

5.1 Generative Model for Synthetic Data

In order to study EFS for unions of subspaces with varied cross-spectra, we will generate synthetic data from unions of overlapping block-sparse signals.

5.1.1 Constructing Sub-dictionaries

We construct a pair of sub-dictionaries as follows: take two subsets Ω_1 and Ω_2 of k atoms each from a dictionary D containing M atoms in its columns, where |Ω_1| = |Ω_2| = k. Let D_1 denote the subset of atoms indexed by Ω_1 and let D_2 denote the subset of atoms indexed by Ω_2. Our goal is to select Ω_1 and Ω_2 such that D_1^T D_2 is diagonal, i.e., ⟨d_1i, d_2j⟩ = 0 if i ≠ j, where d_1i is the i-th element in D_1 and d_2j is the j-th element of D_2. In this case, the cross-spectra is defined as σ = diag(D_1^T D_2), where σ ∈ R^k. For each union, we fix the “overlap” q, or the rank of D_1^T D_2, to a constant between zero (orthogonal subspaces) and k (maximal overlap).

To generate a pair of k-dimensional subspaces with a q-dimensional overlap, we can pair the elements from Ω_1 and Ω_2 such that the i-th entry of the cross-spectra satisfies σ(i) > 0 for i ≤ q and σ(i) = 0 for i > q.

We can leverage the banded structure of shift-invariant dictionaries, e.g., dictionary matrices with localized Toeplitz structure, to generate subspaces with structured cross-spectra as follows (while shift-invariant dictionaries appear in a wide range of applications of sparse recovery (Mailhè et al., 2008; Dyer et al., 2010), we introduce the idea of using shift-invariant dictionaries to create structured unions of subspaces for the first time here). First, we fix a set of incoherent (orthogonal) atoms from our shift-invariant dictionary, which we place in the columns of D_1. Now, holding D_1 fixed, we set the i-th atom of the second sub-dictionary D_2 to be a shifted version of the i-th atom of the dictionary D_1. To be precise, if we set d_1i = φ_j, where φ_j is the j-th atom in our shift-invariant dictionary, then we set d_2i = φ_{j+Δ} for a particular shift Δ. By varying the shift Δ, we can control the coherence between d_1i and d_2i. In Fig. 4, we show an example of one such construction. Since the atoms are normalized, the worst-case pair of subspaces with overlap equal to q is obtained when we pair q identical atoms with k − q orthogonal atoms. In this case, the cross-spectra attains its maximum value of one over its entire support and equals zero otherwise. For such unions, the overlap equals the dimension of the intersection between the subspaces. We will refer to this class of block-sparse signals as orthoblock sparse signals.


Figure 4: Generating unions of subspaces from shift-invariant dictionaries. An example of a collection of two sub-dictionaries of five atoms, where each of the atoms has a non-zero inner product with one other atom. This choice of sub-dictionaries produces a union of disjoint subspaces with maximal overlap ratio.

5.1.2 Coefficient Synthesis

To synthesize a point y that lives in the span of the sub-dictionary D_1, we combine the elements of D_1 and subspace coefficients α linearly to form

$$y = \sum_{i=1}^{k} \alpha(i) \, d_{1i},$$

where α(i) is the subspace coefficient associated with the i-th column in D_1. Without loss of generality, we will assume that the elements in D_1 are sorted such that the values of the cross-spectra are monotonically decreasing. Let y_c = Σ_{i=1}^{q} α(i) d_{1i} be the “common component” of y that lies in the space spanned by the principal directions between the pair of subspaces that correspond to non-orthogonal principal angles, and let y_d = Σ_{i=q+1}^{k} α(i) d_{1i} denote the “disjoint component” of y that lies in the space orthogonal to the space spanned by the first q principal directions.

For our experiments, we consider points drawn from one of the two following coefficient distributions, which we will refer to as (M1) and (M2) respectively.

  • (M1) Uniformly Distributed on the Sphere: Generate subspace coefficients according to a standard normal distribution, α(i) ∼ N(0, 1), and map the point to the unit sphere, y ← y/‖y‖_2.

  • (M2) Bounded Energy Model: Generate subspace coefficients according to (M1) and rescale each coefficient of the common component in order to bound its energy, ‖y_c‖_2^2 ≤ τ.

By simply restricting the total energy that each point has in its common component, the bounded energy model (M2) can be used to produce ensembles with small bounding constant to test the predictions in Thm. 3.
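A sketch of this generative model is given below. It builds an orthoblock union (q shared atoms plus mutually orthogonal atoms) and samples points under (M1) or (M2); the function names, the energy parameter tau, and the choice to rescale only the first q coefficients are our own reading of the models above, not the authors' code.

```python
import numpy as np

def orthoblock_union(n, k, q, seed=0):
    """Two k-dimensional subspaces of R^n sharing a q-dimensional intersection
    (requires n >= 2 * k - q): q identical atoms plus k - q orthogonal atoms each."""
    rng = np.random.default_rng(seed)
    B = np.linalg.qr(rng.standard_normal((n, 2 * k - q)))[0]    # orthonormal atoms
    D1 = B[:, :k]                                               # first sub-dictionary
    D2 = np.hstack([B[:, :q], B[:, k:2 * k - q]])               # shares the first q atoms
    return D1, D2

def sample_points(D, num_points, q, tau=None, seed=1):
    """(M1): Gaussian coefficients mapped to the unit sphere. (M2): additionally rescale
    the first q coefficients so the common component has energy tau before normalization."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((D.shape[1], num_points))
    if tau is not None:
        common = np.linalg.norm(A[:q, :], axis=0, keepdims=True)
        A[:q, :] *= np.sqrt(tau) / np.maximum(common, 1e-12)
    Y = D @ A
    return Y / np.linalg.norm(Y, axis=0, keepdims=True)
```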

5.2 Phase Transitions for OMP

The goal of our first experiment is to study the probability of EFS—the probability that a point in the ensemble admits exact features—as we vary both the number and distribution of points in each subspace as well as the dimension of the intersection between subspaces. For this set of experiments, we generate a union of orthoblock sparse signals, where the overlap equals the dimension of the intersection.


Figure 5: Probability of EFS for different coefficient distributions. The probability of EFS for a union of two subspaces, shown for two different subspace dimensions (left and right columns). The probability of EFS is displayed as a function of the overlap ratio and either the logarithm of the oversampling ratio (top row) or the energy in the common component (bottom row).

Along the top row of Fig. 5, we display the probability of EFS for orthoblock sparse signals generated according to the coefficient model (M1): the probability of EFS is computed as we vary the overlap ratio δ = q/k in conjunction with the oversampling ratio ρ = d_k/k, where q equals the dimension of the intersection between the subspaces, k is the subspace dimension, and d_k is the number of points per subspace. Along the bottom row of Fig. 5, we display the probability of EFS for orthoblock sparse signals generated according to the coefficient model (M2): the probability of EFS is computed as we vary the overlap ratio δ and the amount of energy each point has within its common component. These experiments are carried out for two different subspace dimensions (left and right columns of Fig. 5). To see the phase boundary that arises when we approach critical sampling (i.e., ρ = 1), we display our results in terms of the logarithm of the oversampling ratio. The results are averaged over a number of independent trials.

As our theory predicts, the oversampling ratio has a strong impact on the degree of overlap between subspaces that can be tolerated before EFS no longer occurs. In particular, as the number of points in each subspace increases (covering radius decreases), the probability of EFS obeys a second-order phase transition, i.e., there is a graceful degradation in the probability of EFS as the dimension of the intersection increases. When the pair of subspaces is densely sampled, the phase boundary is shifted nearly all the way to the maximal overlap ratio, where almost all of the dimensions of each subspace intersect. This is due to the fact that as each subspace is sampled more densely, the covering radius becomes sufficiently small to ensure that even when the overlap between planes is high, EFS still occurs with high probability. In contrast, when the subspaces are critically sampled, i.e., the number of points per subspace is close to the subspace dimension, only a small amount of overlap can be tolerated. In addition to shifting the phase boundary, as the oversampling ratio increases, the width of the transition region (where the probability of EFS goes from zero to one) also increases.

Along the bottom row of Fig. 5, we study the impact of the bounding constant on EFS, as discussed in Section 4.2. In this experiment, we fix the oversampling ratio and vary the energy τ in the common component in conjunction with the overlap ratio δ. By reducing the bounding constant of the union, the phase boundary relative to the uniformly distributed data from model (M1) is shifted toward a higher overlap ratio for both subspace dimensions. This result confirms our predictions in the discussion of Thm. 3 that, by reducing the amount of energy that points have in their subspace intersections, EFS will occur for higher degrees of overlap. Another interesting finding of this experiment is that, once τ reaches a threshold, the phase boundary remains constant and further reducing the bounding constant has no impact on the phase transitions for EFS.

5.3 Comparison of OMP and NN

In this section, we compare the probability of EFS for feature selection with OMP and nearest neighbors (NN). First, we compare the performance of both feature selection methods for unions with different cross-spectra. Second, we compare the phase transitions for unions of orthoblock sparse signals as we vary the overlap and oversampling ratio.


Figure 6: Probability of EFS for unions with structured cross-spectra. Along the top row, we show the cross-spectra for different unions of block-sparse signals. Along the bottom row, we show the probability of EFS as we vary the overlap ratio for OMP (solid) and NN (dash).

For our experiments, we generate pairs of subspaces with structured cross-spectra as described in Section 5.1.1. The cross-spectra arising from three different unions of block-sparse signals are displayed along the top row of Fig. 6. On the left, we show the cross-spectra for a union of orthoblock sparse signals with a fixed overlap ratio. The cross-spectra obtained by pairing shifted Lorentzian and exponential atoms are displayed in the middle and right columns, respectively. Along the bottom row of Fig. 6, we show the probability of EFS for OMP and NN for each of these three subspace unions as we vary the overlap q. To do this, we generate subspaces by setting the first q entries of their cross-spectra equal to the corresponding entries of the cross-spectra in Fig. 6 and setting the remaining entries equal to zero. Each subspace cluster is generated by sampling points from each subspace according to the coefficient model (M1).


Figure 7: Phase transitions for OMP and NN. The probability of EFS for orthoblock sparse signals for OMP (a) and NN (b) feature sets as a function of the oversampling ratio and the overlap ratio.

This study provides a number of interesting insights into the role that higher-order principal angles between subspaces play in feature selection for both sparse recovery methods and NN. First, we observe that the gap between the probability of EFS for OMP and NN is markedly different for each of the three unions. In the first union of orthoblock sparse signals, the probability of EFS for OMP lies strictly above that obtained for the NN method, but the gap between the performance of both methods is relatively small. In the second union, both methods maintain a high probability of EFS, with OMP admitting nearly perfect feature sets even when the overlap ratio is maximal. In the third union, we observe that the gap between EFS for OMP and NN is most pronounced. In this case, the probability of EFS for NN sets decreases sharply as the overlap grows, while OMP admits a very high probability of EFS, even when the overlap ratio is maximal. In summary, we observe that when data is distributed uniformly with respect to all of the principal directions between a pair of subspaces and the cross-spectra is sub-linear, then EFS may be guaranteed with high probability for all points in the set provided the sampling density is sufficiently high. This is in agreement with the discussion of EFS for bounded unions in Section 4.2. Moreover, these results further support our claims that in order to truly understand and predict the behavior of endogenous sparse recovery from unions of subspaces, we require a description that relies on the entire cross-spectra.

In Fig. 7, we display the probability of EFS for OMP (left) and sets of NN (right) as we vary the overlap and the oversampling ratio. For this experiment, we consider unions of orthoblock sparse signals living on subspaces of a fixed dimension and vary the oversampling ratio and the overlap ratio. An interesting result of this study is that there are regimes where the probability of EFS equals zero for NN but is non-trivial for OMP. In particular, we observe that when the sampling of each subspace is sparse (the oversampling ratio is low), the gap between OMP and NN increases and OMP significantly outperforms NN in terms of the probability of EFS. Our study of EFS for structured cross-spectra suggests that the gap between NN and OMP should be even more pronounced for cross-spectra with super-linear decay.

5.4 Face Illumination Subspaces

In this section, we compare the performance of sparse recovery methods, i.e., BP and OMP, with NN for clustering unions of illumination subspaces arising from a collection of images of faces under different lighting conditions. By fixing the camera center and the position of the person's face and capturing multiple images under different lighting conditions, the resulting images can be well-approximated by a low-dimensional subspace (Ramamoorthi, 2002).

In Fig. 2, we show three examples of the subspace affinity matrices obtained with NN, BP, and OMP for two different faces under different illumination conditions from the Yale Database B (Georghiades et al., 2001), where each image has been subsampled to a lower resolution. In all of the examples, the data is sorted such that the images for each face are placed in a contiguous block.

To generate the NN affinity matrices in the left column of Fig. 2, we compute the absolute normalized inner products between all points in the dataset and then threshold each row to select a fixed number of nearest neighbors for each point. To generate the OMP affinity matrices in the right column, we compute the sparse representations of each point in the dataset with Alg. 1 for a fixed sparsity level and stack the resulting coefficient vectors into the rows of a matrix C; the final subspace affinity is computed by symmetrizing the coefficient matrix, W = |C| + |C|^T. To generate the BP affinity matrices in the middle column, we solve the BP denoising (BPDN) problem in (4) via a homotopy algorithm, where we sweep over the noise parameter κ and choose the smallest value of κ that produces the target number of coefficients. (We also studied another variant of BPDN where we solve OMP, compute the error of the resulting approximation, and then use this error as the noise parameter κ; however, this variant provided worse results than those reported in Table 1.) The resulting coefficient vectors are then stacked into the rows of a matrix C and the final subspace affinity is computed by symmetrizing the coefficient matrix, W = |C| + |C|^T.

After computing the subspace affinity matrix for each of these three feature selection methods, we employ a spectral clustering approach which partitions the data based upon the eigenvector corresponding to the smallest nonzero eigenvalue of the graph Laplacian of the affinity matrix (Shi and Malik, 2000; Ng et al., 2002). For all three feature selection methods, we obtain the best clustering performance when we cluster the data based upon the unnormalized graph Laplacian rather than the normalized graph Laplacian (Shi and Malik, 2000).

In Table 1, we display the percentage of points that resulted in EFS and the final clustering error for all pairs of subspaces in the Yale B database. Along the top row, we display the mean and median percentage of points that resulted in EFS for the full dataset (all illumination conditions), half of the dataset (half of the illumination conditions, selected at random in each trial), and a quarter of the dataset (a quarter of the illumination conditions, selected at random in each trial). Along the bottom row, we display the clustering error (the percentage of points that were incorrectly classified) for all three methods.
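A minimal sketch of the spectral bipartitioning step described above, assuming a symmetric affinity matrix W and a connected affinity graph (so the smallest nonzero eigenvalue is the second-smallest one):

```python
import numpy as np

def spectral_bipartition(W):
    """Two-way spectral clustering: split the data according to the sign of
    the eigenvector of the unnormalized graph Laplacian associated with its
    smallest nonzero eigenvalue (the Fiedler vector)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                  # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    fiedler = evecs[:, 1]               # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)    # cluster labels in {0, 1}
```

For more than two subspaces, the same idea is typically applied by clustering several trailing eigenvectors with k-means rather than thresholding a single eigenvector.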

While both sparse recovery methods (BPDN and OMP) admit EFS rates that are comparable to NN on the full dataset, we find that sparse recovery methods provide higher rates of EFS than NN when the sampling of each subspace is sparse, i.e., the half and quarter datasets. These results are also in agreement with our experiments on synthetic data. A surprising result is that OMP provides better clustering performance than BP on this particular dataset, even though OMP has lower rates of EFS.


Table 1: Classification and EFS rates for illumination subspaces. Shown are the aggregate results obtained over all pairs of subspaces in the Yale B database.

6 Discussion

In this section, we provide insight into the implications of our results for different applications of sparse recovery and compressive sensing. Following this, we end with some open questions and directions for future research.

6.1 “Data Driven” Sparse Approximation

The standard paradigm in signal processing and approximation theory is to compute a representation of a signal in a fixed and pre-specified basis or overcomplete dictionary. In most cases, the dictionaries used to form these representations are designed according to some mathematical desiderata. A more recent approach has been to learn a dictionary from a collection of data, such that the data admit a sparse representation with respect to the learned dictionary (Olshausen and Field, 1997; Aharon et al., 2006).

The applicability and utility of endogenous sparse recovery in subspace learning suggests that endogenous sparse recovery might also be used for other tasks, including approximation and compression. The question that naturally arises is, "do we design a dictionary, learn a dictionary, or use the data themselves as a dictionary?" Understanding the advantages and tradeoffs among these three approaches is an interesting and open question.

6.2 Learning Block-Sparse Signal Models

Block-sparse signals and other structured sparse signals have received a great deal of attention over the past few years, especially in the context of compressive sensing from structured unions of subspaces (Lu and Do, 2008; Blumensath and Davies, 2009) and in model-based compressive sensing (Baraniuk et al., 2010). In all of these settings, the fact that a class or collection of signals admit structured support patterns is leveraged in order to obtain improved recovery of sparse signals in noise and in the presence of undersampling.

To exploit such structure in sparse signals—especially in situations where the structure of signals or blocks of active atoms may change across different instances in time, space, etc.—the underlying subspaces that the signals occupy must be learned directly from the data. The methods that we have described for learning unions of subspaces from ensembles of data can be utilized in the context of learning block-sparse and other structured sparse signal models. The application of subspace clustering methods for this purpose is an interesting direction for future research.

6.3 Beyond Coherence

While the maximum and cumulative coherence (Tropp, 2004) provide measures of the uniqueness of sub-dictionaries that are necessary to guarantee exact signal recovery for sparse recovery methods, our current study suggests that examining the principal angles formed from pairs of sub-dictionaries could provide an even richer description of the geometric properties of a dictionary. Thus, a study of the principal angles formed by different subsets of atoms from a dictionary might provide new insights into the performance of sparse recovery methods with coherent dictionaries and for compressive sensing from structured matrices. In addition, our empirical results in Section 5.3 suggest that there might exist an intrinsic difference between sparse recovery from dictionaries that exhibit sublinear versus superlinear decay in their principal angles or cross-spectra. It would be interesting to explore whether these two “classes” of dictionaries exhibit different phase transitions for sparse recovery.
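As a small illustration of the quantity in question, the sketch below computes the cross-spectrum (the cosines of the principal angles) between the spans of two sub-dictionaries; the function name is ours, and scipy.linalg.orth is used only as a convenient way to obtain orthonormal bases.

```python
import numpy as np
from scipy.linalg import orth

def cross_spectrum(D1, D2):
    """Cosines of the principal angles between the spans of two
    sub-dictionaries (atoms as columns): the singular values of U1^T U2,
    where U1 and U2 are orthonormal bases for the two spans."""
    U1, U2 = orth(D1), orth(D2)
    return np.linalg.svd(U1.T @ U2, compute_uv=False)   # sorted, values in [0, 1]
```

The leading value of this vector is the cosine of the first principal angle; the maximum coherence, by contrast, only looks at pairwise inner products between individual atoms.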

6.4 Discriminative Dictionary Learning

While dictionary learning was originally proposed for learning dictionaries that admit sparse representations of a collection of signals (Olshausen and Field, 1997; Aharon et al., 2006), it has recently also been employed for classification. To use learned dictionaries for classification, a dictionary is learned for each class of training signals, and a sparse representation of a test signal is then formed with respect to each of the learned dictionaries. The idea is that the test signal will admit a more compact representation with respect to the dictionary that was learned from its own class.
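A hedged sketch of this classification rule (a generic illustration, not a specific method from the literature): assign the test signal to the class whose dictionary yields the smallest sparse-approximation residual. The sparsity level and the use of OMP for the sparse coding step are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify_by_residual(y, class_dictionaries, n_nonzero_coefs=5):
    """Assign test signal y to the class whose dictionary gives the smallest
    sparse-approximation residual."""
    residuals = []
    for D in class_dictionaries:        # one dictionary (atoms as columns) per class
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs,
                                        fit_intercept=False)
        omp.fit(D, y)
        residuals.append(np.linalg.norm(y - D @ omp.coef_))
    return int(np.argmin(residuals))    # index of the predicted class
```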

Instead of learning these dictionaries independently of one another, discriminative dictionary learning (Mairal et al., 2008; Ramirez et al., 2010) aims to learn a collection of dictionaries that are incoherent with one another. This is accomplished by minimizing either the spectral or the Frobenius norm of the matrix product between pairs of dictionaries. The same approach is utilized in (Mailhè et al., 2012) to learn sensing matrices for CS that are incoherent with a learned dictionary.

There are a number of interesting connections between discriminative dictionary learning and our current study of EFS from collections of unions of subspaces. In particular, our study provides new insights into what the principal angles between two dictionaries tell us about our ability to separate classes of data based upon their sparse representations. Our study of EFS from unions with structured cross-spectra suggests that the decay of the cross-spectra between different data classes provides a powerful predictor of the performance of sparse recovery methods for data living on a union of low-dimensional subspaces. This suggests that, in the context of discriminative dictionary learning, it might be more advantageous to reduce the norm of the cross-spectra rather than simply minimizing the maximum coherence and/or the Frobenius norm between points in different subspaces, as in (Mairal et al., 2008) and (Ramirez et al., 2010), respectively. To do this, each class of data must first be embedded within a subspace, an ONB must be formed for each subspace, and then the norm of the cross-spectra must be minimized. An interesting question is how one might impose such a constraint within discriminative dictionary learning methods.
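Since the specific norm is not pinned down here, the following sketch simply computes one natural candidate, the sum of the cross-spectra entries, for two data classes embedded via ONBs of their leading left singular vectors; the choice of this l1-style sum, the subspace dimension, and the assumption of linear subspaces through the origin are all ours, made for illustration.

```python
import numpy as np

def cross_spectra_norm(X1, X2, dim):
    """Embed each data class (signals as columns) in a dim-dimensional
    subspace via its leading left singular vectors, then return the sum of
    the cosines of the principal angles between the two subspaces."""
    U1 = np.linalg.svd(X1, full_matrices=False)[0][:, :dim]
    U2 = np.linalg.svd(X2, full_matrices=False)[0][:, :dim]
    sigma = np.linalg.svd(U1.T @ U2, compute_uv=False)   # the cross-spectra
    return float(sigma.sum())
```

A discriminative learning objective could, in principle, penalize this quantity across all pairs of classes instead of (or in addition to) the usual coherence-based penalties.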

6.5 Open Questions and Future Work

While EFS provides a natural measure of how well a feature selection algorithm will perform for the task of subspace clustering, our empirical results suggest that EFS does not necessarily predict the performance of spectral clustering methods applied to the resulting subspace affinity matrices. In particular, we find that while OMP obtains lower rates of EFS than BP on real-world data, OMP yields better clustering results on the same dataset. Understanding where this difference in performance arises is an interesting direction for future research.

Another interesting finding of our empirical study is that the gap between the rates of EFS for sparse recovery methods and NN depends on the sampling density of each subspace. In particular, we found that for dense samplings of each subspace, the performance of NN is comparable to sparse recovery methods; however, when each subspace is more sparsely sampled, sparse recovery methods provide significant gains over NN methods. This result suggests that endogenous sparse recovery provides a powerful strategy for clustering when the sampling of subspace clusters is sparse. Analyzing the gap between sparse recovery methods and NN methods for feature selection is an interesting direction for future research.

Other directions for future research include: extending our deterministic analysis to random and semi-random settings such as those provided in (Soltanolkotabi and Candès, 2012) and studying the performance of OMP on noisy or corrupted data living on unions of subspaces.

7 Proofs

7.1 Proof of Theorem 1

Our goal is to prove that if (11) holds, then EFS is guaranteed to occur for every point in the ensemble when OMP is used for feature selection. We will prove this by induction.

Consider the greedy selection step in OMP (see Alg. 1) for a point $\mathbf{y}$ that belongs to the subspace cluster $\mathcal{Y}_k$. Recall that at each step of OMP, the point that is maximally correlated with the current signal residual is selected and added to the feature set $\Lambda$. The normalized residual at the $m^{\text{th}}$ step is computed as

$$\mathbf{s}_m \;=\; \frac{(\mathbf{I} - \mathbf{P}_{m-1})\,\mathbf{y}}{\|(\mathbf{I} - \mathbf{P}_{m-1})\,\mathbf{y}\|_2}, \qquad\qquad (21)$$

where $\mathbf{P}_{m-1}$ denotes the orthogonal projector onto the subspace spanned by the points indexed by the current feature set $\Lambda$, with $|\Lambda| = m-1$.

To guarantee that we select a point from $\mathcal{Y}_k$, we require that the following greedy selection criterion holds:

$$\max_{i:\,\mathbf{y}_i \in \mathcal{Y}_k \setminus \Lambda} |\langle \mathbf{s}_m, \mathbf{y}_i \rangle| \;>\; \max_{j:\,\mathbf{y}_j \notin \mathcal{Y}_k} |\langle \mathbf{s}_m, \mathbf{y}_j \rangle|. \qquad\qquad (22)$$

We will prove that this selection criterion holds at each step of OMP by developing an upper bound on the RHS (the maximum inner product between the residual and a point outside of $\mathcal{Y}_k$) and a lower bound on the LHS (the minimum inner product between the residual and a point in $\mathcal{Y}_k$).
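As a purely numerical companion to (21)-(22), and not part of the formal argument, the sketch below checks the criterion for one point. It assumes Y contains the remaining points as unit-norm columns (the point itself removed), in_cluster is a boolean mask over those columns marking the point's own cluster, feature_idx indexes the points already selected by OMP, and at least one unselected point remains on each side of the comparison.

```python
import numpy as np

def greedy_criterion_holds(y, Y, in_cluster, feature_idx):
    """Project y off the span of the already-selected points, normalize the
    residual as in (21), and test whether it is more correlated with some
    unselected point inside the cluster than with every point outside it,
    as in (22)."""
    A = Y[:, feature_idx]
    if A.size:
        P = A @ np.linalg.pinv(A)           # projector onto span of selected points
        r = y - P @ y
    else:
        r = y.copy()                        # first iteration: residual is the signal
    s = r / np.linalg.norm(r)
    corr = np.abs(Y.T @ s)
    corr[feature_idx] = -np.inf             # already-selected points are ineligible
    return bool(corr[in_cluster].max() > corr[~in_cluster].max())
```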

First, we develop the upper bound on the RHS. In the first iteration, the residual is simply the (normalized) signal of interest. In this case, we can bound the RHS by the mutual coherence across all other sets.

Now assume that, at a given iteration, all of the points selected so far belong to the correct subspace cluster. This implies that the signal residual still lies within the same subspace, and thus we can write the residual as the sum of the closest point to it in $\mathcal{Y}_k$ and a remaining portion that also lies in this subspace. Thus, we can bound the RHS as follows

Using the fact that , we can bound the -norm of the vector as

Plugging this quantity into our expression for the RHS, we arrive at the following upper bound

where the final simplification comes from invoking the following Lemma.

Lemma 1

For   ,

Proof of Lemma 1: We wish to develop an upper bound on the function

Thus our goal is to identify a function , where for , and . The derivative of can be upper bounded easily as follows

Thus, and ; this ensures that for , and . By the Fundamental Theorem of Integral Calculus, this provides an upper bound over the domain of interest. To obtain the final result, we take the square root of both sides.

Second, we develop the lower bound on the LHS of the greedy selection criterion. To ensure that we select a point from $\mathcal{Y}_k$ at the first iteration, we require that the nearest neighbor of $\mathbf{y}$ belong to the same subspace cluster. If $\mathbf{y}$ and its nearest neighbor both lie in $\mathcal{Y}_k$, then the first point selected via OMP will result in EFS.

Let us assume that the points in $\mathcal{Y}_k$ admit an $\epsilon$-covering of the subspace cluster. In this case, we have the following bound in effect