A Dictionary-Based Generalization of Robust PCA Part II: Applications to Hyperspectral Demixing

02/26/2019 · Sirisha Rambhatla et al. · University of Minnesota and Princeton University

We consider the task of localizing targets of interest in a hyperspectral (HS) image based on their spectral signature(s), by posing the problem as two distinct convex demixing tasks. With applications ranging from remote sensing to surveillance, this task of target detection leverages the fact that each material/object possesses its own characteristic spectral response, depending upon its composition. However, since signatures of different materials are often correlated, matched filtering-based approaches may not apply here. To this end, we model a HS image as a superposition of a low-rank component and a dictionary sparse component, wherein the dictionary consists of the a priori known characteristic spectral responses of the target we wish to localize, and develop techniques for two different sparsity structures, resulting from different model assumptions. We also present the corresponding recovery guarantees, leveraging our recent theoretical results from a companion paper. Finally, we analyze the performance of the proposed approach via experimental evaluations on real HS datasets for a classification task, and compare its performance with related techniques.


I Introduction

Hyperspectral (HS) imaging is an imaging modality which senses the intensities of the reflected electromagnetic waves (responses) corresponding to different wavelengths of the electromagnetic spectrum, often invisible to the human eye. As the spectral response associated with an object/material depends on its composition, HS imaging is very useful in identifying target objects/materials via their characteristic spectra or signature responses, also referred to as endmembers in the literature. Typical applications of HS imaging range from monitoring agricultural use of land, catchment areas of rivers and water bodies, food processing and surveillance, to detecting various minerals, chemicals, and even the presence of life-sustaining compounds on distant planets; see [1, 2], and references therein for details. However, these spectral signatures are often highly correlated, making it difficult to detect regions of interest based on these endmembers. In this work, we present two techniques to localize target materials/objects in a given HS image based on some structural assumptions on the data, using the a priori known signatures of the target of interest.

The primary property that enables us to localize a target is the approximate low-rankness of HS images when represented as a matrix, owing to the fact that a particular scene is composed of only a limited number of distinct objects/materials [3]. For instance, while imaging an agricultural area, one would expect to record responses from materials like biomass, farm vehicles, roads, houses, water bodies, and so on. Moreover, the spectra of complex materials can be assumed to be a linear mixture of the constituent materials [3, 4], i.e., the received HS responses can be viewed as being generated by a linear mixture model [5]. For the target localization task at hand, this approximate low-rank structure is used to decompose a given HS image into a low-rank part and a component that is sparse in a known dictionary (a dictionary sparse part), wherein the dictionary is composed of the spectral signatures of the target of interest. We begin by formalizing the specific model of interest in the next section.

I-A Model

A HS sensor records the response of a region, which corresponds to a pixel in the HS image as shown in Fig. 1, to different frequencies of the electromagnetic spectrum. As a result, each HS image can be viewed as a data-cube formed by stacking one matrix per spectral band, as shown in Fig. 1. Therefore, each volumetric element, or voxel, of a HS image is a vector whose length equals the number of measurement channels, and it represents the response of the material across those channels. The number of channels is determined by the number of frequency bands across which measurements of the reflectances are made.

Fig. 1: The HS image data-cube corresponding to the Indian Pines dataset.

Formally, let the data matrix be formed by unfolding the HS image such that each of its columns corresponds to a voxel of the data-cube. We then model this matrix as arising from a superposition of a low-rank component of small rank and a dictionary-sparse component, i.e.,

(1)

Here, the dictionary is an a priori known matrix composed of appropriately normalized characteristic responses of the material/object (or of its constituents) we wish to localize, and the sparse coefficient matrix (also referred to as abundances in the literature) selects and weights its columns. Note that the dictionary can also be constructed by learning it from the known spectral signatures of a target; see [6, 7, 8, 9].
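Written out with placeholder symbols (M for the unfolded data, L for the low-rank component, D for the dictionary, and S for the sparse coefficients; the actual symbols in the paper may differ), the decomposition in (1) reads:

\[
\mathbf{M} \;=\; \mathbf{L} \;+\; \mathbf{D}\,\mathbf{S},
\qquad \operatorname{rank}(\mathbf{L}) \ \text{small}, \qquad \mathbf{S} \ \text{sparse (entry-wise or column-wise)},
\]

where each column of \(\mathbf{M}\) is one voxel, \(\mathbf{L}\) captures the background scene, and \(\mathbf{D}\) holds the known target signatures as columns.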

I-B Our Contributions

In this work, we present two techniques (the code is made available at github.com/srambhatla/Dictionary-based-Robust-PCA) for target detection in a HS image, depending upon different sparsity assumptions on the coefficient matrix, by modeling the data as shown in (1). Building on the theoretical results of [10, 11, 12], our techniques operate by forming the dictionary using the a priori known spectral signatures of the target of interest, and leveraging the approximate low-rank structure of the data matrix [13]. Here, the dictionary can be formed from the a priori known signatures directly, or by learning an appropriate dictionary based on target data; see [6, 7, 8, 9].

Fig. 2: Correlated spectral signatures. The spectral signatures even of distinct materials are highly correlated. Shown here are spectral signatures of classes from the Indian Pines dataset [14]. Here, the shaded region shows the lower and upper ranges of reflectance values the signatures take.

We consider two types of sparsity structures for the coefficient matrix, namely, a) global or entry-wise sparsity, wherein the matrix has a limited number of non-zero entries globally, and b) column-wise sparsity, wherein at most a few columns of the matrix have non-zero elements. The choice of a particular sparsity model depends on the properties of the dictionary matrix. In particular, if the target signature admits a sparse representation in the dictionary, the entry-wise sparsity structure is preferred. This is likely to be the case when the dictionary is overcomplete, or fat, and also when the target spectral responses admit a sparse representation in the dictionary. On the other hand, the column-wise sparsity structure is amenable to cases where the representation can use all columns of the dictionary. This potentially arises when the dictionary is undercomplete, or thin. Note that, in the column-wise sparsity case, the non-zero columns need not be sparse themselves. The applicability of these two modalities is also exhibited in our experimental analysis; see Section V for further details. Further, we specialize the theoretical results of [12] to present the conditions under which such a demixing task will succeed under the two sparsity models discussed above; see also [10] and [11].

Next, we analyze the performance of the proposed techniques via extensive experimental evaluations on real-world demixing tasks over different datasets and dictionary choices, and compare the performance of the proposed techniques with related works. This demixing task is particularly challenging since the spectral signatures of even distinct classes are highly correlated, as shown in Fig. 2, where the shaded region shows the upper and lower ranges of different classes. For instance, in Fig. 2 we observe that the spectral signature of the “Stone-Steel” class is similar to that of the “Wheat” class. This correlation between the spectral signatures of different classes results in an approximate low-rank structure of the data, captured by the low-rank component, while the dictionary-sparse component is used to identify the target of interest. We specifically show that such a decomposition successfully localizes the target despite the high correlation between spectral signatures of distinct classes.

Finally, it is worth noting that although we consider thin dictionaries for the purposes of this work, since they are more suitable for the current exposition, our theoretical results are also applicable to the fat case; see [10], [11], and [12] for further details.

I-C Prior Art

The model shown in (1) is closely related to a number of well-known problems. To start, in the absence of the dictionary sparse part, (1) reduces to the popular problem of principal component analysis (PCA) [15, 16]. The problem considered here also shares its structure with variants of PCA, such as robust PCA [17, 18] (where the dictionary is the identity matrix), outlier pursuit [19] (where the dictionary is the identity and the coefficient matrix is column-wise sparse), and others [20, 21, 22, 23, 24, 25, 26, 27, 28].

On the other hand, in the absence of the low-rank part, the problem can be identified as that of sparse recovery [29, 30, 31, 32]. Following this, sparse recovery methods for the analysis of HS images have been explored in [33, 34, 35, 36]. In addition, in a recent work [37], the authors further impose a low-rank constraint on the coefficient matrix for the demixing task. Further, applications of compressive sampling have been explored in [38], while [5] analyzes the case where HS images are noisy and incomplete. The techniques discussed above focus on identifying all materials in a given HS image. However, for target localization tasks, it is of interest to identify only specific target(s) in a given HS image. As a result, there is a need for techniques which localize targets based on their a priori known spectral signatures.

The model described in (1) was introduced in [39] as a means to detect traffic anomalies in a network, wherein the authors focus on a case where the dictionary is overcomplete, i.e., fat, and its rows are orthogonal. There, the coefficient matrix is assumed to possess at most a few nonzero elements per row and column, and a limited number of nonzero elements globally. In a recent work [10] and the accompanying theoretical work [12], we analyze the extension of [39] to include a case where the dictionary has more rows than columns, i.e., is thin, while removing the orthogonality constraint for both the thin and the fat dictionary cases, when the number of dictionary elements is small. This case is particularly amenable to the target localization task at hand, since often we aim to localize targets based on a few a priori known spectral signatures. To this end, we focus our attention on the thin case, although a similar analysis applies for the fat case [10]; see also [12].

I-D Related Techniques

To study the properties of our techniques, we compare and contrast their performance with related works. First, as a sanity check, we compare the performance of the proposed techniques with matched filtering-based methods (detailed in Section V). In addition, we compare the performance of our techniques to other closely related methods based on the sparsity assumptions on the coefficient matrix, as described below.

For the entry-wise sparse structure: The first method we compare with is based on the observation that in cases where the known dictionary is thin, we can multiply (1) on the left by the pseudo-inverse of the dictionary, in which case the model shown in (1) reduces to that of robust PCA, i.e.,

(RPCA)

where the transformed data is the superposition of a transformed low-rank part and the sparse coefficient matrix. Therefore, in this case, we can recover the sparse matrix by robust PCA [17, 18], and estimate the low-rank part using the estimate of the sparse coefficient matrix. Note that this is not applicable for the fat case due to the non-trivial null space of its pseudo-inverse.
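A compact way to state this reduction, again with the placeholder symbols M, L, D, and S introduced above (D^+ denotes the Moore–Penrose pseudo-inverse), is:

\[
\mathbf{D}^{+}\mathbf{M} \;=\; \mathbf{D}^{+}\mathbf{L} \;+\; \mathbf{D}^{+}\mathbf{D}\,\mathbf{S}
\;=\; \widetilde{\mathbf{L}} + \mathbf{S},
\qquad \widetilde{\mathbf{L}} := \mathbf{D}^{+}\mathbf{L},
\]

since \(\mathbf{D}^{+}\mathbf{D}\) is the identity when the thin dictionary has full column rank; \(\widetilde{\mathbf{L}}\) remains low-rank only if the rank of \(\mathbf{L}\) is small relative to the number of dictionary columns.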

Although at first glance this seems like a reasonable technique, somewhat surprisingly, it does not succeed for all thin dictionaries. Specifically, in cases where the rank of the low-rank component is greater than the number of dictionary elements, the pseudo-inversed component is no longer “low-rank.” In fact, since the notion of low-rankness is relative to the potential maximum rank of the component, the pseudo-inversed component can be close to full rank. As a result, the robust PCA model shown in (RPCA) is no longer applicable and the demixing task may not succeed; see our corresponding theoretical paper [12] for details.

Moreover, even in cases where (RPCA) succeeds, our proposed one-shot procedure guarantees the recovery of the two components under some mild conditions, while the pseudo-inverse based procedure requires a two-step approach – one step to recover the sparse coefficient matrix and another to recover the low-rank component – in addition to a non-trivial analysis of the interaction between the pseudo-inverse and the low-rank part. This is also apparent from our experiments shown in Section V, which indicate that optimization based on the model in (1) is more robust compared to (RPCA) for the classification problem at hand across different choices of the dictionaries.

For the column-wise sparse structure: The column-wise sparse structure of the coefficient matrix results in a column-wise sparse structure of the dictionary-sparse component. As a result, the model at hand is similar to that studied in outlier pursuit (OP) [19]. Specifically, the OP technique is aimed at identifying the outlier columns in a given matrix. However, it fails in cases where the target of interest is not an outlier, as in the case of HS data. On the other hand, since the proposed technique uses the dictionary corresponding to the spectral signatures of the target of interest to guide the demixing procedure, it results in a spectral signature-driven technique for target localization. This distinction between the two procedures is also discussed in our corresponding theoretical work [12], Section V, and is exemplified by our experimental results shown in Section V.

Further, as in the entry-wise case, one can also envision a pseudo-inverse based procedure to identify the target of interest via OP [19] on the pseudo-inversed data, i.e.,

(OP)

where the transformed low-rank part and the coefficient matrix play the roles of the two components, with the coefficient matrix admitting a column-wise sparse structure. However, this variant of OP does not succeed when the rank of the low-rank component is greater than the number of dictionary elements, as in the previous case; see our related theoretical work [12], Section V, for details.

The rest of the paper is organized as follows. We formulate the problem and introduce relevant theoretical quantities in Section II, followed by specializing the theoretical results for the current application in Section III. Next, in Section IV, we present the specifics of the algorithms for the two cases. In Section V, we describe the experimental set-up and demonstrate the applicability of the proposed approaches via extensive numerical simulations on real HS datasets for a classification task. Finally, we conclude this discussion in Section VI.

Notation: Given a matrix, we denote its i-th column and its (i, j)-th element using the usual indexing. We use the spectral norm, i.e., the maximum singular value of the matrix, the Frobenius norm, and entry-wise and column-wise maximum norms defined via the canonical basis vectors, where a canonical basis vector has 1 at one location and 0 elsewhere; we also use the usual p-norms for vectors. In addition, the nuclear norm, the entry-wise 1-norm, and the 1,2-norm (the sum of the 2-norms of the columns) of a matrix serve as convex relaxations of the rank, entry-wise sparsity, and column-wise sparsity, respectively.

II Problem Formulation

In this section, we introduce the optimization problem of interest and different theoretical quantities pertinent to our analysis.

II-A Optimization Problems

Our aim is to recover the low-rank component and the sparse coefficient matrix, given the dictionary and samples generated according to the model shown in (1). Here, the coefficient matrix can either have an entry-wise sparse structure or a column-wise sparse structure. We now crystallize our model assumptions to formulate appropriate convex optimization problems for the two sparsity structures.

Specifically, depending upon the priors about the sparsity structure of the coefficient matrix and the low-rank property of the background component, we aim to solve the following convex optimization problems, i.e.,

(D-RPCA(E))

for the entry-wise sparsity case, and

(D-RPCA(C))

for the column-wise sparsity case, to recover the low-rank component and the coefficient matrix with appropriate regularization parameters, given the data and the dictionary. Here, the a priori known dictionary is assumed to be undercomplete (thin) for the application at hand; the analysis of a more general case can be found in [12]. Further, “D-RPCA” refers to “dictionary based robust principal component analysis”, while the qualifiers “E” and “C” indicate the entry-wise and column-wise sparsity patterns, respectively.
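With the same placeholder symbols as above, and using the convex surrogates listed in the notation paragraph (nuclear norm, entry-wise 1-norm, and 1,2-norm), the two programs take the following form; this is a sketch of the formulations, and the regularization parameters and symbols are illustrative:

\begin{align*}
\text{(D-RPCA(E))}\quad & \min_{\mathbf{L},\,\mathbf{S}} \ \|\mathbf{L}\|_{*} + \lambda_{e}\,\|\mathbf{S}\|_{1}
\quad \text{s.t.} \quad \mathbf{M} = \mathbf{L} + \mathbf{D}\mathbf{S}, \\
\text{(D-RPCA(C))}\quad & \min_{\mathbf{L},\,\mathbf{S}} \ \|\mathbf{L}\|_{*} + \lambda_{c}\,\|\mathbf{S}\|_{1,2}
\quad \text{s.t.} \quad \mathbf{M} = \mathbf{L} + \mathbf{D}\mathbf{S}.
\end{align*}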

Note that, in the column-wise sparse case, there is an inherent ambiguity regarding the recovery of the true component pair corresponding to the low-rank part and the dictionary sparse component, respectively. Specifically, any pair that satisfies the model, whose low-rank part has the same column space as the true low-rank component, and whose dictionary sparse part has the identical column support as the true dictionary sparse component, is a solution of D-RPCA(C). To this end, we define the following oracle model to characterize the optimality of any solution pair.

Definition D.1 (Oracle Model for Column-wise Sparse Case).

Let the pair of true components forming the data as per (1) define the corresponding oracle model. Then, any candidate pair is in the oracle model if the following hold simultaneously: the pair satisfies (1), its low-rank part is preserved by the projection onto the column space of the true low-rank component, and its dictionary sparse part is preserved by the projection onto the column support of the true dictionary sparse component.

For this case, we first establish sufficient conditions for the existence of a solution based on certain incoherence conditions. Following this, our main result for the column-wise case states the sufficient conditions under which solving the convex optimization problem recovers a solution pair in the oracle model.

II-B Conditions on the Dictionary

For our analysis, we require that the dictionary satisfies the generalized frame property (GFP), defined as follows.

Definition D.2.

A matrix satisfies the generalized frame property (GFP) on a set of vectors if, for any fixed non-zero vector in that set, the norm of its image under the matrix is bounded below and above by the lower and upper generalized frame bounds (scaled by the norm of the vector), with the lower bound strictly positive.

The GFP is met as long as the vector is not in the null-space of the matrix and the norm of the matrix is bounded. Therefore, for the thin dictionary setting, for both the entry-wise and column-wise sparsity cases, this condition is satisfied as long as the dictionary has full column rank, in which case the set of vectors can be the entire space. For example, the dictionary being a frame [40] suffices; see [41] for a brief overview of frames.
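One plausible rendering of the GFP in Definition D.2, using an assumed dictionary D and frame bounds α_ℓ and α_u (a sketch; the exact constants and the vector set are specified in [12]), is:

\[
\alpha_{\ell}\,\|\mathbf{v}\|_{2}^{2} \;\le\; \|\mathbf{D}\mathbf{v}\|_{2}^{2} \;\le\; \alpha_{u}\,\|\mathbf{v}\|_{2}^{2},
\qquad 0 < \alpha_{\ell} \le \alpha_{u} < \infty,
\]

for every fixed vector \(\mathbf{v}\) in the prescribed set.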

II-C Relevant Subspaces

Before we define the relevant subspaces for this discussion, we set up a few preliminaries. First, for the entry-wise sparse case, let a pair of components be the solution to D-RPCA(E), and for the column-wise sparse case, let a pair be in the oracle model; see Definition D.1.

Next, for the low-rank matrix, let its compact singular value decomposition (SVD) be written as the product of the matrix of left singular vectors, a diagonal matrix with the singular values arranged in descending order on the diagonal, and the transpose of the matrix of right singular vectors; the matrices of left and right singular vectors each have orthogonal columns. Further, let the associated linear subspace consist of matrices spanning the same row or column space as the low-rank component.

Next, let the coefficient support subspace be the space spanned by matrices with the same non-zero support (column support, denoted as csupp, for the column-wise case) as the coefficient matrix, and let the corresponding subspace of dictionary sparse matrices be defined accordingly. Here, csupp denotes the index set containing the non-zero column indices of the coefficient matrix for the column-wise sparsity case. In addition, we denote the corresponding complements of the spaces described above by appending the orthogonal-complement symbol.

We use a calligraphic symbol to denote the projection operator onto a subspace indicated by its subscript, and a corresponding non-calligraphic symbol to denote the associated projection matrix with the appropriate subscript. Using these definitions, the projection operators onto, and orthogonal to, the subspace associated with the low-rank component are defined accordingly.
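For reference, with an assumed compact SVD L = U Σ Vᵀ, the standard projections onto and orthogonal to the subspace of matrices sharing the row or column space of L (the usual construction in the robust PCA literature, stated here as a sketch) are:

\begin{align*}
\mathcal{P}_{\mathcal{L}}(\mathbf{X}) &= \mathbf{U}\mathbf{U}^{\top}\mathbf{X} + \mathbf{X}\mathbf{V}\mathbf{V}^{\top} - \mathbf{U}\mathbf{U}^{\top}\mathbf{X}\mathbf{V}\mathbf{V}^{\top}, \\
\mathcal{P}_{\mathcal{L}^{\perp}}(\mathbf{X}) &= (\mathbf{I} - \mathbf{U}\mathbf{U}^{\top})\,\mathbf{X}\,(\mathbf{I} - \mathbf{V}\mathbf{V}^{\top}).
\end{align*}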

II-D Incoherence Measures

We also employ various notions of incoherence to identify the conditions under which our procedures succeed. To this end, we first define an incoherence parameter that characterizes the relationship between the low-rank part and the dictionary sparse part, as

(2)

This parameter measures the degree of similarity between the low-rank part and the dictionary sparse component; a larger value implies that the dictionary sparse component is closer to the low-rank part. In addition, we define a second parameter as

(3)

which measures the similarity between the orthogonal complement of the column space of the low-rank component and the dictionary.

The next two measures of incoherence can be interpreted as a way to identify the cases where, for the low-rank component with its SVD: (a) the left singular vectors resemble the dictionary, and (b) the right singular vectors resemble the sparse coefficient matrix. In these cases, the low-rank part may resemble the dictionary sparse component. To this end, similar to [39], we define the following measures to identify these cases as

(4)

Here, the first measure achieves its upper bound when a dictionary element is exactly aligned with the column space of the low-rank component, and its lower bound when all of the dictionary elements are orthogonal to that column space. Moreover, the second measure achieves its upper bound when the row space of the low-rank component is “spiky”, i.e., a certain row of the right singular vector matrix is 1-sparse, meaning that a column of the low-rank component is supported by (can be expressed as a linear combination of) a single column of the left singular vector matrix. Its lower bound is attained when the row space is “spread out”, i.e., each column of the low-rank component is a linear combination of all columns of the left singular vector matrix. In general, recovery of the two components is easier when these incoherence parameters are closer to their lower bounds. In addition, for notational convenience, we define constants

(5)

Here, the first constant is the maximum absolute entry of a matrix formed from the singular vectors, and measures how close the columns of the low-rank component are to its singular vectors. Similarly, for the column-wise case, the second constant measures the closeness of the columns of the low-rank component to its singular vectors under a different metric (the maximal column-wise 2-norm).

III Theoretical Results

In this section, we specialize our theoretical results [12] to the HS demixing task. Specifically, we provide the main results corresponding to each sparsity structure of the coefficient matrix for the thin dictionary case considered here. We start with the theoretical results for the entry-wise sparsity case, and then present the corresponding theoretical guarantees for the column-wise sparsity structure; see [12] for detailed proofs.

III-A Exact Recovery for the Entry-wise Sparsity Case

For the entry-wise case, our main result establishes the existence of a regularization parameter for which solving the optimization problem D-RPCA(E) recovers the two components exactly. To this end, we show that such a regularization parameter belongs to a non-empty interval whose lower and upper end-points are defined as

(6)

Here, these end-points depend on a constant that captures the relationship between the different model parameters. Given these definitions, we have the following result for the entry-wise sparsity structure.

Theorem 1.

Suppose the data follows the model in (1), where the coefficient matrix has at most a prescribed number of non-zero entries, and the thin dictionary obeys the generalized frame property (Definition D.2) with appropriate frame bounds, while the rank of the low-rank component and the sparsity level jointly satisfy

(7)

Then, given the incoherence measures and constants defined in (2), (4), and (5), respectively, and any regularization parameter in the interval defined in (6), solving D-RPCA(E) will exactly recover the low-rank component and the sparse coefficient matrix.

We observe that the conditions for the recovery of the components are closely related to the incoherence measures between the low-rank part, the dictionary, and the sparse component. In general, smaller sparsity, rank, and incoherence parameters suffice to ensure the recovery of the components for a particular problem. This is in line with our intuition that the more distinct the two components, the easier it should be to tease them apart. For our HS demixing problem, this indicates that a target of interest can be localized as long as its spectral signature is sufficiently different from those of the other materials in the scene.

III-B Recovery for the Column-wise Sparsity Case

For the column-wise sparsity model, recall that any pair in the oracle model described in Definition D.1 is considered optimal. To this end, we first establish the sufficient conditions for the existence of such an optimal pair via the following lemma.

Lemma 2.

Given the data, the dictionary, and the oracle model of Definition D.1, any pair in the oracle model satisfies the required conditions provided the relevant incoherence parameter is sufficiently small.

In essence, we need the relevant incoherence parameter to be strictly smaller than a fixed threshold. Next, analogous to the entry-wise case, we show that the regularization parameter belongs to a non-empty interval, using which solving D-RPCA(C) recovers an optimal pair in the oracle model of Definition D.1 in accordance with Lemma 2. Here, for a suitable constant, the end-points of this interval are defined as

(8)

This leads us to the following result for the column-wise case.

Theorem 3.

Suppose the data follows the model in (1), with the true components defining the oracle model of Definition D.1, where at most a prescribed number of columns of the coefficient matrix are non-zero. Then, given the incoherence measures and constants defined in (2), (3), (4), and (5), respectively, and any regularization parameter in the interval defined in (8), solving D-RPCA(C) will recover a pair of components in the oracle model, provided the dictionary obeys the generalized frame property of Definition D.2 with appropriate frame bounds.

Theorem 3 outlines the sufficient conditions under which the solution to the optimization problem D-RPCA(C) will be in the oracle model defined in Definition D.1. Here, for conditions on the frame bounds that can be easily met by a tight frame, and for appropriate choices of the constants, the resulting guarantee is of the same order as that of Outlier Pursuit (OP) [19]. Moreover, our numerical results in [12] show that D-RPCA(C) can be much more robust than OP, and may recover the components even when the rank of the low-rank part is high and the number of outliers is a constant proportion of the number of columns. This implies that D-RPCA(C) will succeed as long as the dictionary can successfully represent the target of interest while rejecting the columns of the data matrix corresponding to materials other than the target.

Input: the data matrix, the dictionary, the regularization parameters, the step size (based on the Lipschitz constant), and the continuation parameters.
   Initialize: the low-rank and coefficient iterates (and their previous values), the momentum parameter, and the continuation parameter.
  while not converged do
     Generate auxiliary points for the low-rank and coefficient iterates using momentum.
     Take a gradient step of the least-squares fit term at these auxiliary points.
     Update the low-rank part via singular value thresholding.
     Update the dictionary sparse part via entry-wise soft thresholding (for D-RPCA(E)) or column-norm thresholding (for D-RPCA(C)).
     Update the momentum term parameter.
     Update the continuation parameter.
  end while
   return the estimates of the low-rank component and the coefficient matrix.
Algorithm 1: APG Algorithm for D-RPCA(E) and D-RPCA(C), adapted from [39]

IV Algorithmic Considerations

The optimization problems of interest, D-RPCA(E) and D-RPCA(C), for the entry-wise and column-wise case, respectively, are convex but non-smooth. To solve for the components of interest, we adopt the accelerated proximal gradient (APG) algorithm, as shown in Algorithm 1. Note that [39] also applied the APG algorithm for D-RPCA(E), and we present a unified algorithm for both sparsity cases for completeness.

IV-A Background

The APG algorithm is motivated by a long line of work starting with [42], which showed the existence of a first-order algorithm with a convergence rate of O(1/k^2) for a smooth convex objective, where k denotes the iteration count. Following this, [43] developed the popular fast iterative shrinkage-thresholding algorithm (FISTA), which achieves this convergence rate for convex non-smooth objectives by accelerating the proximal gradient descent algorithm using a momentum term (as in Algorithm 1), as prescribed by [42]. As a result, it became a staple for solving a wide range of convex non-smooth tasks including matrix completion [44], and robust PCA [45] and its variants [39, 19]. Also, recently, [46] has shown further improvements in the rate of convergence.

In addition to the momentum term, the APG procedure evaluates the gradient at an extrapolated point, i.e., a point advanced along the direction of the previous update. Along with faster convergence, this choice of evaluation point reduces oscillations around the optimum; see [43] and references therein.

IV-B Discussion of Algorithm 1

For the optimization problems of interest, we solve unconstrained problems by transforming the equality constraint into a least-squares term which penalizes the fit. In particular, the problems we solve via the APG algorithm are given by

(9)

for the entry-wise sparsity case, and

(10)

for the column-wise sparsity case. We note that although for the application at hand the thin dictionary case might be more useful in practice, Algorithm 1 allows for the use of fat dictionaries as well.
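The following Python sketch mirrors the structure of Algorithm 1 for the penalized entry-wise problem (9); the symbols (M, D, lam_L, lam_S), the specific update formulas, and the stopping rule are illustrative assumptions patterned on standard APG/FISTA practice rather than a verbatim transcription of the paper's algorithm.

import numpy as np

def soft_threshold(X, tau):
    """Entry-wise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def apg_entrywise(M, D, lam_L, lam_S, mu_final=1e-6, n_iter=500):
    """APG sketch for: min mu*(lam_L*||L||_* + lam_S*||S||_1) + 0.5*||M - L - D S||_F^2."""
    m, n = M.shape
    d = D.shape[1]
    L = Lp = np.zeros((m, n))
    S = Sp = np.zeros((d, n))
    t = tp = 1.0
    mu = 0.99 * np.linalg.norm(M, 2)                 # continuation: start large, then shrink
    step = 1.0 / (1.0 + np.linalg.norm(D, 2) ** 2)   # 1 / (Lipschitz constant of the gradient)
    for _ in range(n_iter):
        # Momentum (extrapolation) points
        Yl = L + ((tp - 1.0) / t) * (L - Lp)
        Ys = S + ((tp - 1.0) / t) * (S - Sp)
        # Gradient step on the least-squares fit term at the extrapolated points
        R = Yl + D @ Ys - M
        Gl = Yl - step * R
        Gs = Ys - step * (D.T @ R)
        Lp, Sp = L, S
        # Proximal updates: SVT for the low-rank part, soft thresholding for S
        L = svt(Gl, mu * lam_L * step)
        S = soft_threshold(Gs, mu * lam_S * step)
        # Momentum and continuation parameter updates
        tp, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        mu = max(0.9 * mu, mu_final)
    return L, S

For the column-wise problem (10), the low-rank update is unchanged and the soft_threshold call would be replaced by a column-norm thresholding of S, matching the operator described later in this section.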

(a) Learned dictionary, 4 dictionary elements

lambda  Method            Threshold  TPR    FPR    AUC
0.01    D-RPCA(E)         0.300      0.979  0.023  0.989
        RPCA              0.650      0.957  0.049  0.974
        MF                N/A        0.957  0.036  0.994
        MF (transformed)  N/A        0.914  0.104  0.946
0.1     D-RPCA(E)         0.800      0.989  0.017  0.997
        RPCA              0.800      0.989  0.014  0.997
        MF                N/A        0.989  0.016  0.998
        MF (transformed)  N/A        0.989  0.010  0.998
0.5     D-RPCA(E)         0.600      0.968  0.031  0.991
        RPCA              0.600      0.935  0.067  0.988
        MF                N/A        0.548  0.474  0.555
        MF (transformed)  N/A        0.849  0.119  0.939

(b) Learned dictionary, 10 dictionary elements

lambda  Method            Threshold  TPR    FPR    AUC
0.01    D-RPCA(E)         0.600      0.935  0.060  0.972
        RPCA              0.700      0.978  0.023  0.990
        MF                N/A        0.624  0.415  0.681
        MF (transformed)  N/A        0.569  0.421  0.619
0.1     D-RPCA(E)         0.500      0.968  0.029  0.993
        RPCA              0.500      0.871  0.144  0.961
        MF                N/A        0.688  0.302  0.713
        MF (transformed)  N/A        0.527  0.469  0.523
0.5     D-RPCA(E)         1.000      0.978  0.031  0.996
        RPCA              2.200      0.849  0.113  0.908
        MF                N/A        0.807  0.309  0.781
        MF (transformed)  N/A        0.527  0.465  0.539

(c) Dictionary formed by sampling voxels, 15 dictionary elements

Method            Threshold  TPR    FPR    AUC
D-RPCA(E)         0.300      0.989  0.021  0.998
RPCA              3.000      0.849  0.146  0.900
MF                N/A        0.957  0.085  0.978
MF (transformed)  N/A        0.796  0.217  0.857

(d) Average performance

Method            TPR (mean ± st.dev.)  FPR (mean ± st.dev.)  AUC (mean ± st.dev.)
D-RPCA(E)         0.972 ± 0.019         0.030 ± 0.014         0.991 ± 0.009
RPCA              0.919 ± 0.061         0.079 ± 0.055         0.959 ± 0.040
MF                0.796 ± 0.179         0.234 ± 0.187         0.814 ± 0.178
MF (transformed)  0.739 ± 0.195         0.258 ± 0.192         0.775 ± 0.207

TABLE I: Entry-wise sparsity model for the Indian Pines dataset. Simulation results are presented for our proposed approach (D-RPCA(E)), the robust-PCA based approach on the pseudo-inverse transformed data (RPCA), matched filtering (MF) on the original data, and matched filtering on the transformed data (MF (transformed)), across different numbers of dictionary elements and different values of the regularization parameter (lambda) for the initial dictionary learning procedure; see Algorithm 2. The threshold selects columns with column-norm greater than the threshold such that the AUC is maximized; TPR and FPR are reported at the best operating point. For each case, the best performing metrics are reported in bold for readability. Further, some matched filtering entries correspond to cases where the ROC curve was “flipped” (i.e., the classifier output was inverted to achieve the best performance).

Algorithm 1 also employs a continuation technique [45], which can be viewed as a “warm start” procedure. Here, we initialize the continuation parameter at some large value and geometrically reduce it until it reaches a prescribed floor. A smaller choice of this floor results in a solution which is closer to the optimal solution of the constrained problem. Further, as the continuation parameter approaches zero, (9) and (10) recover the optimal solutions of D-RPCA(E) and D-RPCA(C), respectively. Moreover, Algorithm 1 also utilizes knowledge of the smoothness constant (the Lipschitz constant of the gradient) to set the step-size parameter.

Specifically, the APG algorithm requires that the gradient of the smooth part of the convex objectives shown in (9) and (10) be Lipschitz continuous with a finite Lipschitz constant. Since the gradient of the least-squares term with respect to the low-rank component and the coefficient matrix can be written in closed form, it follows that this gradient is Lipschitz continuous, with a constant that depends on the dictionary, which is used to set the step size in Algorithm 1.
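In the placeholder notation used earlier, and writing the smooth part of (9) and (10) as f(L, S) = ½‖M − L − DS‖_F², the gradient is given by the following standard computation (the symbols are assumptions):

\[
\nabla_{\mathbf{L}} f = -(\mathbf{M} - \mathbf{L} - \mathbf{D}\mathbf{S}), \qquad
\nabla_{\mathbf{S}} f = -\mathbf{D}^{\top}(\mathbf{M} - \mathbf{L} - \mathbf{D}\mathbf{S}),
\]

and a valid (possibly conservative) Lipschitz constant for the joint gradient is \(1 + \|\mathbf{D}\|_2^{2}\).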

The updates of the low-rank component and of the sparse matrix for the entry-wise case both involve a soft-thresholding step, which shrinks each entry of a matrix towards zero by a fixed amount and sets to zero any entry whose magnitude is below that amount. In case of the low-rank part, we apply this function to the singular values (therefore referred to as singular value thresholding) [44], while for the update of the dictionary sparse component, we apply it to the sparse coefficient matrix.

The low-rank update step for the column-wise case remains the same as for the entry-wise case. However, for the column-wise case, we threshold the columns of the coefficient matrix based on their column norms, i.e., each column is either shrunk towards zero or set to zero entirely depending on whether its norm exceeds the threshold.
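For completeness, the standard forms of these proximal operators, with τ denoting the threshold, are sketched below using common definitions from the robust PCA literature; the paper's exact notation may differ.

\begin{align*}
[\mathcal{S}_{\tau}(\mathbf{X})]_{ij} &= \operatorname{sign}(X_{ij})\,\max\!\big(|X_{ij}| - \tau,\, 0\big)
&& \text{(entry-wise soft thresholding)}, \\
\operatorname{SVT}_{\tau}(\mathbf{X}) &= \mathbf{U}\,\mathcal{S}_{\tau}(\boldsymbol{\Sigma})\,\mathbf{V}^{\top},
\quad \mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top}
&& \text{(singular value thresholding)}, \\
[\mathcal{C}_{\tau}(\mathbf{X})]_{:,i} &= \max\!\Big(1 - \tfrac{\tau}{\|\mathbf{X}_{:,i}\|_{2}},\, 0\Big)\,\mathbf{X}_{:,i}
&& \text{(column-norm thresholding)}.
\end{align*}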

IV-C Parameter Selection

Since the ranges of regularization parameters prescribed by our main theoretical results contain quantities (such as the incoherence measures) that cannot be evaluated in practice, we employ a grid-search strategy over the range of admissible values for the low-rank and dictionary sparse components to find the best values of the regularization parameters. We now discuss the specifics of the grid search for each sparsity case.

IV-C1 Selecting Parameters for the Entry-wise Case

The choice of the two regularization parameters in Algorithm 1 is based on the optimality conditions of the optimization problem shown in (9). As presented in [39], the ranges of the parameters associated with the low-rank part and the sparse coefficient matrix, respectively, are intervals whose upper end-points are determined by the data and the dictionary; these are the ranges used to set the parameters in Algorithm 1.

These ranges are derived using the optimization problem shown in (9). Specifically, we find the largest values of these regularization parameters which yield an all-zero solution pair by analyzing the optimality conditions of (9); this value of each regularization parameter then defines the upper bound of its range. For instance, setting both components to zero, the optimality condition for the low-rank part involves the sub-differential of the nuclear norm at zero. Therefore, for a zero solution pair, the condition requires the spectral norm of the data to be no larger than the corresponding regularization parameter, and hence the maximum value of this parameter which drives the low-rank part to an all-zero solution is the spectral norm of the data.

Similarly, for the dictionary sparse component, the optimality condition for choosing its regularization parameter involves the sub-differential of the entry-wise 1-norm at zero. Again, for a zero solution pair, we need the largest absolute entry of the product of the dictionary transpose and the data to be no larger than the regularization parameter, meaning that the maximum value of this parameter that drives the dictionary sparse part to zero is that largest absolute entry.
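In the placeholder notation used above, these two upper bounds can be summarized as follows (a sketch of the standard derivation following [39]; the exact symbols may differ):

\[
\lambda_{L}^{\max} = \|\mathbf{M}\|_{2}, \qquad
\lambda_{S}^{\max} = \|\mathbf{D}^{\top}\mathbf{M}\|_{\infty},
\]

so that the grid search is performed over \(\lambda_{L} \in (0, \|\mathbf{M}\|_{2}]\) and \(\lambda_{S} \in (0, \|\mathbf{D}^{\top}\mathbf{M}\|_{\infty}]\).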

IV-C2 Selecting Parameters for the Column-wise Case

Again, the choice of the two regularization parameters is derived from the optimization problem shown in (10). In this case, the ranges of the parameters associated with the low-rank part and the sparse coefficient matrix, respectively, are evaluated using an analysis similar to that of the entry-wise case, by analyzing the optimality conditions of the optimization problem shown in (10) instead of (9).
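Putting the two previous subsections together, a minimal grid-search sketch over the two regularization parameters might look as follows; it assumes the apg_entrywise sketch given earlier and a task-specific score function (e.g., classification AUC on validation voxels), and the grid itself is illustrative.

import numpy as np

def grid_search(M, D, score_fn, n_grid=10):
    """Grid search over the two regularization parameters (illustrative ranges)."""
    lam_L_max = np.linalg.norm(M, 2)          # spectral norm of the data
    lam_S_max = np.abs(D.T @ M).max()         # largest entry of D^T M
    best_score, best_params = -np.inf, None
    for lam_L in np.linspace(0.05, 1.0, n_grid) * lam_L_max:
        for lam_S in np.linspace(0.05, 1.0, n_grid) * lam_S_max:
            L, S = apg_entrywise(M, D, lam_L, lam_S)
            score = score_fn(L, S)            # e.g., AUC of the induced classification
            if score > best_score:
                best_score, best_params = score, (lam_L, lam_S)
    return best_params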

V Experimental Evaluation

We now evaluate the performance of the proposed technique on real HS data. We begin by introducing the dataset used for the simulations, following which we describe the experimental set-up and present the results.

V-A Data

Indian Pines Dataset: We first consider the “Indian Pines” dataset [14], which was collected over the Indian Pines test site in North-western Indiana in June of 1992 using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [47] sensor, a popular choice for collecting HS images for various remote sensing applications. This dataset consists of spectral reflectances across a large number of wavelength bands from a scene which is composed mostly of agricultural land along with two major dual-lane highways, a rail line and some built structures, as shown in Fig. 3(a). The dataset is further processed by removing the bands corresponding to water absorption, which results in a HS data-cube as visualized in Fig. 1. This modified dataset is available as the “corrected Indian Pines” dataset [14], with the ground truth containing the labeled classes; it is henceforth referred to as the “Indian Pines Dataset”. We form the data matrix by stacking the voxels of the image side by side, which results in a matrix with one column per voxel.