# A Dictionary-Based Generalization of Robust PCA Part II: Applications to Hyperspectral Demixing

We consider the task of localizing targets of interest in a hyperspectral (HS) image based on their spectral signature(s), by posing the problem as two distinct convex demixing tasks. With applications ranging from remote sensing to surveillance, this task of target detection leverages the fact that each material/object possesses its own characteristic spectral response, depending upon its composition. However, since signatures of different materials are often correlated, matched filtering-based approaches may not apply here. To this end, we model a HS image as a superposition of a low-rank component and a dictionary sparse component, wherein the dictionary consists of the a priori known characteristic spectral responses of the target we wish to localize, and develop techniques for two different sparsity structures, resulting from different model assumptions. We also present the corresponding recovery guarantees, leveraging our recent theoretical results from a companion paper. Finally, we analyze the performance of the proposed approach via experimental evaluations on real HS datasets for a classification task, and compare its performance with related techniques.


## I Introduction

Hyperspectral (HS) imaging is an imaging modality which senses the intensities of the reflected electromagnetic waves (responses) corresponding to different wavelengths of the electromagnetic spectrum, often invisible to the human eye. As the spectral response associated with an object/material depends on its composition, HS imaging is very useful for identifying target objects/materials via their characteristic spectra or signature responses, also referred to as endmembers in the literature. Typical applications of HS imaging range from monitoring agricultural use of land, catchment areas of rivers and water bodies, food processing, and surveillance, to detecting various minerals, chemicals, and even the presence of life-sustaining compounds on distant planets; see [1, 2] and references therein for details. However, these spectral signatures are often highly correlated, making it difficult to detect regions of interest based on these endmembers. In this work, we present two techniques to localize target materials/objects in a given HS image based on some structural assumptions on the data, using the a priori known signatures of the target of interest.

The primary property that enables us to localize a target is the approximate low-rankness of HS images when represented as a matrix, owing to the fact that a particular scene is composed of only a limited type of objects/materials [3]. For instance, while imaging an agricultural area, one would expect to record responses from materials like biomass, farm vehicles, roads, houses, water bodies, and so on. Moreover, the spectra of complex materials can be assumed to be a linear mixture of the constituent materials [3, 4], i.e. the received HS responses can be viewed as being generated by a linear mixture model [5]. For the target localization task at hand, this approximate low-rank structure is used to decompose a given HS image into a low-rank part, and a component that is sparse in a known dictionary – a dictionary sparse part– wherein the dictionary is composed of the spectral signatures of the target of interest. We begin by formalizing the specific model of interest in the next section.

### I-A Model

A HS sensor records the response of a region, which corresponds to a pixel in the HS image as shown in Fig. 1, to different frequencies of the electromagnetic spectrum. As a result, each HS image can be viewed as a data-cube formed by stacking f matrices of size n×m, as shown in Fig. 1. Therefore, each volumetric element or voxel of a HS image is a vector of length f, and represents the response of the material to f measurement channels. Here, f is determined by the number of channels or frequency bands across which measurements of the reflectances are made.

Formally, let M∈Rf×nm be formed by unfolding the HS image, such that each column of M corresponds to a voxel of the data-cube. We then model M as arising from a superposition of a low-rank component L with rank r, and a dictionary-sparse component, expressed as DS, i.e.,

 M=L+DS. (1)

Here, D∈Rf×d represents an a priori known dictionary composed of appropriately normalized characteristic responses of the material/object (or the constituents of the material) we wish to localize, and S∈Rd×nm refers to the sparse coefficient matrix (also referred to as abundances in the literature). Note that D can also be constructed by learning a dictionary based on the known spectral signatures of a target; see [6, 7, 8, 9].
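
To make the model concrete, the following minimal sketch generates a synthetic instance of (1) with a rank-r background L, a thin unit-norm dictionary D, and an entry-wise sparse S; all sizes, sparsity levels, and distributions here are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
f, n, m, d, r = 50, 20, 20, 10, 3      # channels, image size, atoms, rank (hypothetical)
nm = n * m

# Low-rank background: product of two thin Gaussian factors.
L = rng.standard_normal((f, r)) @ rng.standard_normal((r, nm))

# Thin dictionary with unit-norm columns (normalized target signatures).
D = rng.standard_normal((f, d))
D /= np.linalg.norm(D, axis=0)

# Entry-wise sparse abundance matrix S (~2% non-zeros).
S = np.zeros((d, nm))
mask = rng.random((d, nm)) < 0.02
S[mask] = rng.standard_normal(mask.sum())

M = L + D @ S                          # the observed (unfolded) HS data, per (1)
```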

### I-B Our Contributions

In this work, we present two techniques (the code is made available at github.com/srambhatla/Dictionary-based-Robust-PCA) for target detection in a HS image, depending upon different sparsity assumptions on the matrix S, by modeling the data as shown in (1). Building on the theoretical results of [10, 11, 12], our techniques operate by forming the dictionary D using the a priori known spectral signatures of the target of interest, and leveraging the approximate low-rank structure of the data matrix M [13]. Here, the dictionary D can be formed from the a priori known signatures directly, or by learning an appropriate dictionary based on target data; see [6, 7, 8, 9].

We consider two types of sparsity structures for the coefficient matrix S, namely, a) global or entry-wise sparsity, wherein we let the matrix S have se non-zero entries globally, and b) column-wise sparsity, wherein at most sc columns of the matrix S have non-zero elements. The choice of a particular sparsity model depends on the properties of the dictionary matrix D. In particular, if the target signature admits a sparse representation in the dictionary, the entry-wise sparsity structure is preferred. This is likely to be the case when the dictionary is overcomplete (f<d) or fat, and also when the target spectral responses admit a sparse representation in the dictionary. On the other hand, the column-wise sparsity structure is amenable to cases where the representation can use all columns of the dictionary. This potentially arises when the dictionary is undercomplete (f≥d) or thin. Note that, in the column-wise sparsity case, the non-zero columns need not be sparse themselves. The applicability of these two modalities is also exhibited in our experimental analysis; see Section V for further details. Further, we specialize the theoretical results of [12] to present the conditions under which such a demixing task succeeds under the two sparsity models discussed above; see also [10] and [11].

Next, we analyze the performance of the proposed techniques via extensive experimental evaluations on real-world demixing tasks over different datasets and dictionary choices, and compare their performance with that of related works. This demixing task is particularly challenging since the spectral signatures of even distinct classes are highly correlated with each other, as shown in Fig. 2, where the shaded regions show the upper and lower ranges of different classes. For instance, in Fig. 2 we observe that the spectral signature of the “Stone-Steel” class is similar to that of the class “Wheat”. This correlation between the spectral signatures of different classes results in an approximate low-rank structure of the data, captured by the low-rank component L, while the dictionary-sparse component DS is used to identify the target of interest. We specifically show that such a decomposition successfully localizes the target despite the high correlation between the spectral signatures of distinct classes.

Finally, it is worth noting that although we consider thin dictionaries (f≥d) for the purposes of this work, since this setting is more suitable for the current exposition, our theoretical results are also applicable in the fat case (f<d); see [10], [11], and [12] for further details.

### I-C Prior Art

The model shown in (1) is closely related to a number of well-known problems. To start, in the absence of the dictionary-sparse part DS, (1) reduces to the popular problem of principal component analysis (PCA) [15, 16]. The problem considered here also shares its structure with variants of PCA, such as robust PCA [17, 18] (with D=I for an identity matrix I), outlier pursuit [19] (where D=I and S is column-wise sparse), and others [20, 21, 22, 23, 24, 25, 26, 27, 28].

On the other hand, in the absence of the low-rank part L, the problem can be identified as that of sparse recovery [29, 30, 31, 32]. Following this, sparse recovery methods for the analysis of HS images have been explored in [33, 34, 35, 36]. In addition, in a recent work [37], the authors further impose a low-rank constraint on the coefficient matrix S for the demixing task. Further, applications of compressive sampling have been explored in [38], while [5] analyzes the case where HS images are noisy and incomplete. The techniques discussed above focus on identifying all materials in a given HS image. However, for target localization tasks, it is of interest to identify only specific target(s) in a given HS image. As a result, there is a need for techniques which localize targets based on their a priori known spectral signatures.

The model described in (1) was introduced in [39] as a means to detect traffic anomalies in a network, wherein the authors focus on a case where the dictionary D is overcomplete, i.e., fat, and the rows of D are orthogonal. There, the coefficient matrix S is assumed to possess at most a small number of nonzero elements per row and column, as well as a bounded number of nonzero elements globally. In a recent work [10] and the accompanying theoretical work [12], we analyze the extension of [39] to a case where the dictionary has more rows than columns, i.e., is thin, while removing the orthogonality constraint for both the thin and the fat dictionary cases, when d is small. This case is particularly amenable to the target localization task at hand, since we often aim to localize targets based on a few a priori known spectral signatures. To this end, we focus our attention on the thin case, although a similar analysis applies to the fat case [10]; see also [12].

### I-D Related Techniques

To study the properties of our techniques, we compare and contrast their performance with related works. First, as a sanity check, we compare the performance of the proposed techniques with matched filtering-based methods (detailed in Section V). In addition, we compare the performance of our techniques to other closely related methods based on the sparsity assumptions on the matrix S, as described below.

For the entry-wise sparse structure: The first method we compare to is based on the observation that in cases where the known dictionary is thin, we can multiply (1) on the left by the pseudo-inverse of D, denoted D†, in which case the model shown in (1) reduces to that of robust PCA, i.e.,

 ˜M=˜L+S, (RPCA†)

where ˜M=D†M and ˜L=D†L. Therefore, in this case, we can recover the sparse coefficient matrix S by robust PCA [17, 18], and estimate the low-rank part L using the estimate of S. Note that this is not applicable in the fat case due to the non-trivial null space of the pseudo-inverse.

Although at first glance this seems like a reasonable technique, somewhat surprisingly, it does not succeed for all thin dictionaries. Specifically, in cases where r, the rank of L, is greater than the number of dictionary elements d, the pseudo-inversed component ˜L is no longer “low-rank.” In fact, since the notion of low-rankness is relative to the potential maximum rank of the component, ˜L can be close to full-rank. As a result, the robust PCA model shown in (RPCA†) is no longer applicable and the demixing task may not succeed; see our corresponding theoretical paper [12] for details.

Moreover, even in cases where (RPCA†) succeeds (r<d), our proposed one-shot procedure guarantees the recovery of the two components under some mild conditions, while the pseudo-inverse based procedure (RPCA†) requires a two-step procedure – one step to recover the sparse coefficient matrix S and another to recover the low-rank component L – in addition to a non-trivial analysis of the interaction between D† and the low-rank part L. This is also apparent from our experiments in Section V, which indicate that optimization based on the model in (1) is more robust than (RPCA†) for the classification problem at hand across different choices of the dictionaries.
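
For reference, the (RPCA†) reduction itself is a one-liner in code; the following is only a sketch of the preprocessing step, assuming a thin D with full column rank and synthetic placeholder data.

```python
import numpy as np

rng = np.random.default_rng(0)
f, d, nm = 50, 10, 400
D = rng.standard_normal((f, d)); D /= np.linalg.norm(D, axis=0)
M = rng.standard_normal((f, nm))   # stand-in for the unfolded HS data

# Pseudo-inverse reduction: ~M = D† M = ~L + S, with ~L = D† L.
# Any off-the-shelf robust PCA solver can then be run on M_tilde; per the
# caveat above, this fails when rank(L) >= d.
D_pinv = np.linalg.pinv(D)         # d x f; D has full column rank a.s.
M_tilde = D_pinv @ M
```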

For the column-wise sparse structure: The column-wise sparse structure of the matrix S results in a column-wise sparse structure of the dictionary-sparse component DS. As a result, the model at hand is similar to that studied in outlier pursuit (OP) [19]. Specifically, the OP technique is aimed at identifying the outlier columns in a given matrix. However, it fails in cases where the target of interest is not an outlier, as is the case for HS data. On the other hand, since the proposed technique uses the dictionary D corresponding to the spectral signatures of the target of interest to guide the demixing procedure, it results in a spectral signature-driven technique for target localization. This distinction between the two procedures is also discussed in our corresponding theoretical work [12, Section V], and is exemplified by our experimental results in Section V.

Further, as in the entry-wise case, one can also envision a pseudo-inverse based procedure to identify the target of interest via OP [19] on the pseudo-inversed data (referred to as OP† in our discussion), i.e.,

 ˜M=˜L+S, (OP†)

where ˜M=D†M and ˜L=D†L, with S admitting a column-wise sparse structure. However, this variant of OP does not succeed when the rank of the low-rank component is greater than the number of dictionary elements, i.e., r≥d, as in the previous case; see our related theoretical work [12, Section V] for details.

The rest of the paper is organized as follows. We formulate the problem and introduce relevant theoretical quantities in Section II, followed by specializing the theoretical results for the current application in Section III. Next, in Section IV, we present the specifics of the algorithms for the two cases. In Section V, we describe the experimental set-up and demonstrate the applicability of the proposed approaches via extensive numerical simulations on real HS datasets for a classification task. Finally, we conclude this discussion in Section VI.

Notation: Given a matrix X, Xi denotes its i-th column and Xi,j denotes its (i,j)-th element. We use ∥X∥:=σmax(X) for the spectral norm, where σmax(X) denotes the maximum singular value of the matrix, ∥X∥∞:=maxi,j|Xij|, ∥X∥∞,∞:=maxi∥e⊤iX∥1, and ∥X∥∞,2:=maxi∥Xei∥, where ei denotes the canonical basis vector with 1 at the i-th location and 0 elsewhere. Further, we denote the ℓ2-norm of a vector x as ∥x∥. Finally, ∥⋅∥∗, ∥⋅∥1, and ∥⋅∥1,2 refer to the nuclear norm, the entry-wise ℓ1-norm, and the ℓ1,2-norm (the sum of the ℓ2-norms of the columns) of a matrix, respectively, which serve as convex relaxations of the rank, sparsity, and column-wise sparsity, respectively.

## II Problem Formulation

In this section, we introduce the optimization problem of interest and different theoretical quantities pertinent to our analysis.

### II-A Optimization problems

Our aim is to recover the low-rank component L and the sparse coefficient matrix S, given the dictionary D and samples generated according to the model shown in (1). Here, the coefficient matrix S can either have an entry-wise sparse structure or a column-wise sparse structure. We now crystallize our model assumptions to formulate appropriate convex optimization problems for the two sparsity structures.

Specifically, depending upon the priors on the sparsity structure of S, and the low-rank property of the component L, we aim to solve the following convex optimization problems, i.e.,

 min L,S ∥L∥∗+λe∥S∥1  s.t.  M=L+DS (D-RPCA(E))

for the entry-wise sparsity case, and

 min L,S ∥L∥∗+λc∥S∥1,2  s.t.  M=L+DS (D-RPCA(C))

for the column-wise sparse case, to recover L and S with regularization parameters λe≥0 and λc≥0, given the data M and the dictionary D. Here, the a priori known dictionary D is assumed to be undercomplete (thin, i.e., d≤f) for the application at hand; the analysis of a more general case can be found in [12]. Further, “D-RPCA” refers to “dictionary-based robust principal component analysis”, while the qualifiers “E” and “C” indicate the entry-wise and column-wise sparsity patterns, respectively.
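
As an illustration, both programs can be prototyped in a few lines with CVXPY; the sizes, the synthetic M, and the λ values below are placeholder assumptions, and on HS-sized data a first-order solver such as the APG scheme of Section IV is the practical choice.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
f, d, nm = 50, 10, 200                              # illustrative sizes
D = np.linalg.qr(rng.standard_normal((f, d)))[0]    # thin, full column rank
M = rng.standard_normal((f, nm))                    # stand-in for the HS data

def d_rpca(M, D, lam, columnwise=False):
    """Solve min ||L||_* + lam*||S||_1 (or ||S||_{1,2}) s.t. M = L + D S."""
    L = cp.Variable(M.shape)
    S = cp.Variable((D.shape[1], M.shape[1]))
    penalty = cp.sum(cp.norm(S, 2, axis=0)) if columnwise else cp.norm1(S)
    prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * penalty),
                      [M == L + D @ S])
    prob.solve()
    return L.value, S.value

L_e, S_e = d_rpca(M, D, lam=0.1)                    # D-RPCA(E)
L_c, S_c = d_rpca(M, D, lam=0.5, columnwise=True)   # D-RPCA(C)
```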

Note that, in the column-wise sparse case there is an inherent ambiguity regarding the recovery of the true component pair (L,S) corresponding to the low-rank part and the dictionary-sparse component, respectively. Specifically, any pair (L0,S0) satisfying M=L0+DS0=L+DS, where L0 and L have the same column space, and S0 and S have identical column support, is a solution of D-RPCA(C). To this end, we define the following oracle model to characterize the optimality of any solution pair (L0,S0).

###### Definition D.1 (Oracle Model for Column-wise Sparse Case).

Let the pair (L,S) be the matrices forming the data M as per (1), and define the corresponding oracle model {M,U,ISc}. Then, any pair (L0,S0) is in the oracle model {M,U,ISc} if PU(L0)=L, PSc(DS0)=DS, and L0+DS0=L+DS=M hold simultaneously, where PU and PSc are projections onto the column space U of L and the column support ISc of S, respectively.

For this case, we first establish the sufficient conditions for the existence of a solution based on certain incoherence conditions. Following this, our main result for the column-wise case states the sufficient conditions under which solving a convex optimization problem recovers a solution pair (L0,S0) in the oracle model.

### II-B Conditions on the Dictionary

For our analysis, we require that the dictionary D satisfy the generalized frame property (GFP), defined as follows.

###### Definition D.2.

A matrix D satisfies the generalized frame property (GFP) on a set of vectors R if, for any fixed vector v∈R with v≠0, we have

 αℓ∥v∥22≤∥Dv∥22≤αu∥v∥22,

where αℓ and αu are the lower and upper generalized frame bounds with 0<αℓ≤αu<∞.

The GFP is met as long as the vector v is not in the null-space of the matrix D, and ∥D∥ is bounded. Therefore, for the thin dictionary setting (d≤f) considered here, for both the entry-wise and column-wise sparsity cases, this condition is satisfied as long as D has full column rank, and R can be the entire space. For example, D being a frame [40] suffices; see [41] for a brief overview of frames.
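
When R is the entire space, the tightest GFP constants are simply the extreme squared singular values of D; the following small check (a sketch, with frame_bounds being our hypothetical helper name) makes this concrete.

```python
import numpy as np

def frame_bounds(D):
    """Tightest GFP constants over the whole space:
    alpha_l = sigma_min(D)^2, alpha_u = sigma_max(D)^2."""
    s = np.linalg.svd(D, compute_uv=False)
    return s[-1] ** 2, s[0] ** 2

D = np.linalg.qr(np.random.default_rng(1).standard_normal((50, 10)))[0]
alpha_l, alpha_u = frame_bounds(D)   # both ~1 for orthonormal columns
assert alpha_l > 0                   # equivalent to D having full column rank
```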

### II-C Relevant Subspaces

Before we define the relevant subspaces for this discussion, we set up a few preliminaries. First, let the pair (L0,S0) be the solution to D-RPCA(E) for the entry-wise sparse case, and for the column-wise sparse case, let the pair (L0,S0) be in the oracle model {M,U,ISc}; see Definition D.1.

Next, for the low-rank matrix L, let the compact singular value decomposition (SVD) be represented as

 L=UΣV⊤,

where U∈Rf×r and V∈Rnm×r are the left and right singular vectors of L, respectively, and Σ is a diagonal matrix with the singular values arranged in descending order on the diagonal. Here, the matrices U and V each have orthonormal columns. Further, let L be the linear subspace consisting of matrices spanning the same row or column space as L, i.e.,

 L:={UW⊤1+W2V⊤, W1∈Rnm×r, W2∈Rf×r}.

Next, let Se (Sc) be the space spanned by d×nm matrices with the same non-zero support (column support, denoted csupp) as S, and let D be defined as

 D:={DH}, where H∈Se for the entry-wise case, and csupp(H)⊆ISc for the column-wise case.

Here, ISc denotes the index set containing the non-zero column indices of S for the column-wise sparsity case. In addition, we denote the complements of the spaces described above by appending ‘⊥’.

We use the calligraphic ‘P(⋅)’ to denote the projection operator onto the subspace indicated by its subscript, and ‘P’ to denote the corresponding projection matrix with the appropriate subscript. Using these definitions, the projection operators onto and orthogonal to the subspace L are given by

 PL(L)=PUL+LPV−PULPV

and

 PL⊥(L)=(I−PU)L(I−PV),

respectively.
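
As a quick numerical sanity check of these definitions, the following sketch builds the two projections from the compact SVD of a synthetic rank-r matrix (sizes are illustrative) and verifies that PL(Z)+PL⊥(Z)=Z.

```python
import numpy as np

rng = np.random.default_rng(2)
f, nm, r = 30, 40, 2
L = rng.standard_normal((f, r)) @ rng.standard_normal((r, nm))  # rank-r matrix

U, _, Vt = np.linalg.svd(L, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T
PU, PV = U @ U.T, V @ V.T            # projection matrices onto col/row spaces

def proj_L(Z):
    """P_L: projection onto matrices sharing L's column or row space."""
    return PU @ Z + Z @ PV - PU @ Z @ PV

def proj_L_perp(Z):
    """P_L_perp: projection onto the orthogonal complement of that subspace."""
    return (np.eye(f) - PU) @ Z @ (np.eye(nm) - PV)

Z = rng.standard_normal((f, nm))
assert np.allclose(proj_L(Z) + proj_L_perp(Z), Z)   # the two parts sum to Z
```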

### II-D Incoherence Measures

We also employ various notions of incoherence to identify the conditions under which our procedures succeed. To this end, we first define the incoherence parameter μ, which characterizes the relationship between the low-rank part L and the dictionary-sparse part DS, as

 μ:=maxZ∈D∖{0d×nm} ∥PL(Z)∥F/∥Z∥F. (2)

The parameter μ∈[0,1] measures the degree of similarity between the low-rank part and the dictionary-sparse component; a larger μ implies that the dictionary-sparse component is close to the low-rank part. In addition, we also define the parameter βU as

 βU:=max∥u∥=1 ∥(I−PU)Du∥2/∥Du∥2, (3)

which measures the similarity between the orthogonal complement of the column-space U and the dictionary D.

The next two measures of incoherence can be interpreted as a way to identify the cases where, for L with SVD L=UΣV⊤: (a) U resembles the dictionary D, and (b) V resembles the sparse coefficient matrix S. In these cases, the low-rank part may resemble the dictionary-sparse component. To this end, similar to [39], we define the following measures to identify these cases:

 (a) γU:=maxi ∥PUDei∥2/∥Dei∥2 and (b) γV:=maxi ∥PVei∥2. (4)

Here, 0≤γU≤1 achieves its upper bound when a dictionary element is exactly aligned with the column space U of L, and its lower bound when all of the dictionary elements are orthogonal to U. Moreover, γV∈[r/nm,1] achieves its upper bound when the row-space of L is “spiky”, i.e., a certain row of V is 1-sparse, meaning that a column of L is supported by (can be expressed as a linear combination of) a column of U. Its lower bound is attained when the row-space is “spread-out”, i.e., each column of L is a linear combination of all columns of U. In general, recovery of the two components is easier when the incoherence parameters γU and γV are closer to their lower bounds. In addition, for notational convenience, we define the constants

 ξe:=∥D⊤UV⊤∥∞  and  ξc:=∥D⊤UV⊤∥∞,2. (5)

Here, ξe is the maximum absolute entry of D⊤UV⊤, which measures how close the columns of D are to the singular vectors of L. Similarly, for the column-wise case, ξc measures the closeness of the columns of D to the singular vectors of L under a different metric (the maximal column-wise ℓ2-norm).
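
The quantities in (4) and (5) can be evaluated directly from the compact SVD factors of L; the sketch below does so on synthetic data (sizes illustrative). The parameter μ in (2) involves a maximization over the subspace D and is not computed here.

```python
import numpy as np

rng = np.random.default_rng(3)
f, nm, d, r = 50, 400, 10, 3
L = rng.standard_normal((f, r)) @ rng.standard_normal((r, nm))  # rank-r part
D = rng.standard_normal((f, d)); D /= np.linalg.norm(D, axis=0)

Ufull, _, Vt = np.linalg.svd(L, full_matrices=False)
U, V = Ufull[:, :r], Vt[:r, :].T       # compact SVD factors of L

PU = U @ U.T
gamma_U = np.max(np.linalg.norm(PU @ D, axis=0))   # (4a); D has unit columns
gamma_V = np.max(np.linalg.norm(V, axis=1))        # (4b): ||P_V e_i|| = row norms of V
G = D.T @ U @ Vt[:r, :]                            # D^T U V^T
xi_e = np.max(np.abs(G))                           # xi_e in (5)
xi_c = np.max(np.linalg.norm(G, axis=0))           # xi_c in (5)
```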

## III Theoretical Results

In this section, we specialize our theoretical results [12] to the HS demixing task. Specifically, we provide the main results corresponding to each sparsity structure of S for the thin dictionary case considered here. We start with the theoretical results for the entry-wise sparsity case, and then present the corresponding theoretical guarantees for the column-wise sparsity structure; see [12] for detailed proofs.

### III-A Exact Recovery for the Entry-wise Sparsity Case

For the entry-wise case, our main result establishes the existence of a regularization parameter λe for which solving the optimization problem D-RPCA(E) recovers the components L and S exactly. To this end, we show that such a λe belongs to a non-empty interval [λmine,λmaxe], where λmine and λmaxe are defined as

 λmine:=((1+Ce)/(1−Ce))⋅ξe  and  λmaxe:=(√αℓ(1−μ)−√(rαu)μ)/√se. (6)

Here, Ce(αu,αℓ,γU,γV,se,d,k,μ), where 0≤Ce<1, is a constant that captures the relationship between the different model parameters, and is defined as

 Ce:=c/(αℓ(1−μ)2−c),

where

 c=(αu/2)((1+2γU)(min(se,d)+seγV)+2γV min(se,nm))−(αℓ/2)(min(se,d)+seγV).

Given these definitions, we have the following result for the entry-wise sparsity structure.

###### Theorem 1.

Suppose M=L+DS, where rank(L)=r and S has at most se non-zeros, i.e., ∥S∥0≤se≤smaxe:=((1−μ)2/2)⋅(nm/r), and the dictionary D∈Rf×d for d≤f obeys the generalized frame property D.2 with frame bounds [αℓ,αu], where 0<αℓ≤1/(1−μ)2, and γU satisfies

 γU≤((1−μ)2−2seγV)/(2se(1+γV)) for se≤min(d,smaxe), and γU≤((1−μ)2−2seγV)/(2(d+seγV)) for d<se≤smaxe. (7)

Then, given μ∈[0,1], γU and γV∈[r/nm,1], and ξe defined in (2), (4), and (5), respectively, and any λe∈[λmine,λmaxe] with λmaxe>λmine≥0 defined in (6), solving D-RPCA(E) will recover the matrices L and S.

We observe that the conditions for the recovery of (L,S) are closely related to the incoherence measures (μ, γV, and γU) between the low-rank part L, the dictionary D, and the sparse component S. In general, smaller sparsity, rank, and incoherence parameters suffice to ensure the recovery of the components for a particular problem. This is in line with our intuition that the more distinct the two components, the easier it should be to tease them apart. For our HS demixing problem, this indicates that a target of interest can be localized as long as its spectral signature is appropriately different from those of the other materials in the scene.

### III-B Recovery for the Column-wise Sparsity Case

For the column-wise sparsity model, recall that any pair in the oracle model described in Definition D.1 is considered optimal. To this end, we first establish the sufficient conditions for the existence of such an optimal pair (L0,S0) via the following lemma.

###### Lemma 2.

Given M, D, and (L,Sc,D), any pair (L0,S0)∈{M,U,ISc} satisfies span{col(L0)}=U and csupp(S0)=ISc if μ<1.

In essence, we need the incoherence parameter μ to be strictly smaller than 1. Next, analogous to the entry-wise case, we show that λc belongs to a non-empty interval [λminc,λmaxc], such that solving D-RPCA(C) recovers an optimal pair in the oracle model of Definition D.1 in accordance with Lemma 2. Here, for a constant Cc, λminc and λmaxc are defined as

 λminc:=(ξc+√(rscαu)μCc)/(1−scCc)  and  λmaxc:=(√αℓ(1−μ)−√(rαu)μ)/√sc. (8)

This leads us to the following result for the column-wise case.

###### Theorem 3.

Suppose M=L+DS with (L,S) defining the oracle model {M,U,ISc}, where rank(L)=r and |ISc|=sc for sc≤smaxc:=(αℓ/(αuγV))⋅((1−μ)2/βU). Given μ∈[0,1), βU, γV∈[r/nm,1], and ξc as defined in (2), (3), (4), and (5), respectively, and any λc∈[λminc,λmaxc], for λmaxc>λminc≥0 defined in (8), solving D-RPCA(C) will recover a pair of components (L0,S0)∈{M,U,ISc}, provided the dictionary D∈Rf×d obeys the generalized frame property D.2 with frame bounds [αℓ,αu] for αℓ>0.

Theorem 3 outlines the sufficient conditions under which the solution of the optimization problem D-RPCA(C) lies in the oracle model defined in D.1. Here, for a case where 1≲αℓ≤αu≲1 (which can easily be met by a tight frame when f>d), a constant (1−μ)2/βU, and γV=Θ(r/nm), we have smaxc=O(nm/r), which is of the same order as in outlier pursuit (OP) [19]. Moreover, our numerical results in [12] show that D-RPCA(C) can be much more robust than OP, and may recover {U,ISc} even when the rank of L is high and the number of outliers sc is a constant proportion of nm. This implies that D-RPCA(C) will succeed as long as the dictionary D can successfully represent the target of interest while rejecting the columns of the data matrix M corresponding to materials other than the target.

## IV Algorithmic Considerations

The optimization problems of interest, D-RPCA(E) and D-RPCA(C), for the entry-wise and column-wise case, respectively, are convex but non-smooth. To solve for the components of interest, we adopt the accelerated proximal gradient (APG) algorithm, as shown in Algorithm 1. Note that [39] also applied the APG algorithm for D-RPCA(E), and we present a unified algorithm for both sparsity cases for completeness.

### IV-A Background

The APG algorithm is motivated by a long line of work starting with [42], which showed the existence of a first-order algorithm with a convergence rate of O(1/k2) for smooth convex objectives, where k denotes the iteration count. Following this, [43] developed the popular fast iterative shrinkage-thresholding algorithm (FISTA), which achieves this convergence rate for convex non-smooth objectives by accelerating the proximal gradient descent algorithm using a momentum term (the term (t[k−1]−1)/t[k] in Algorithm 1), as prescribed by [42]. As a result, it became a staple for a wide range of convex non-smooth tasks including matrix completion [44], robust PCA [45], and its variants [39, 19]. More recently, [46] has shown further improvements in the rate of convergence.

In addition to the momentum term, the APG procedure operates by evaluating the gradient at an extrapolated point, i.e., a point further along the direction indicated by the momentum term. Along with faster convergence, this look-ahead reduces oscillations around the optimum; see [43] and references therein.

### IV-B Discussion of Algorithm 1

For the optimization problems of interest, we solve an unconstrained problem by transforming the equality constraint into a least-squares term which penalizes the misfit. In particular, the problems we solve via the APG algorithm are given by

 min L,S ν∥L∥∗+νλe∥S∥1+(1/2)∥M−L−DS∥2F (9)

for the entry-wise sparsity case, and

 min L,S ν∥L∥∗+νλc∥S∥1,2+(1/2)∥M−L−DS∥2F (10)

for the column-wise sparsity case. We note that although for the application at hand the thin dictionary case (d≤f) is more useful in practice, Algorithm 1 also allows for the use of fat dictionaries (d>f).

Algorithm 1 also employs a continuation technique [45], which can be viewed as a “warm start” procedure. Here, we initialize the parameter ν0 at some large value and reduce it geometrically until it reaches a value ¯ν. A smaller choice of ¯ν results in a solution closer to the optimal solution of the constrained problem; as ν approaches zero, (9) and (10) recover the optimal solutions of D-RPCA(E) and D-RPCA(C), respectively. Moreover, Algorithm 1 utilizes the smoothness constant Lf (the Lipschitz constant of the gradient) to set the step-size parameter.

Specifically, the APG algorithm requires that the gradient of the smooth part f(L,S):=(1/2)∥M−L−DS∥2F of the convex objectives shown in (9) and (10) be Lipschitz continuous with minimum Lipschitz constant Lf. The gradient ∇f(L,S) with respect to the stacked variable [L; S] is given by

 ∇f(L,S)=−[I D]⊤(M−[I D][L; S]),

and ∇f is Lipschitz continuous, as

 ∥∇f(L1,S1)−∇f(L2,S2)∥≤Lf∥[L1; S1]−[L2; S2]∥,

where

 Lf=∥[I D]⊤[I D]∥=λmax([I D]⊤[I D]),

as shown in Algorithm 1.

The updates of the low-rank component L and the sparse matrix S for the entry-wise case both involve a soft-thresholding step Sτ(⋅), where for a matrix Y, Sτ(Yij) is defined as

 Sτ(Yij)=sgn(Yij) max(|Yij|−τ,0).

For the low-rank part, we apply this function to the singular values (therefore referred to as singular value thresholding) [44], while for the update of the dictionary-sparse component, we apply it to the sparse coefficient matrix S.

The low-rank update step for the column-wise case remains the same as for the entry-wise case. However, for the column-wise case we threshold the columns of S based on their column norms, i.e., for a column Yj of a matrix Y, the column-norm based soft-thresholding function Cτ(⋅) is defined as

 Cτ(Yj)=max(∥Yj∥−τ,0)⋅Yj/∥Yj∥.
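
To make the above concrete, here is a compact numpy sketch of these APG iterations at a fixed ν. It is a simplified stand-in for Algorithm 1: the continuation loop over ν and the stopping criteria are omitted, and apg_demix is our illustrative name, not the released implementation.

```python
import numpy as np

def soft(Y, tau):
    """Entry-wise soft-thresholding S_tau."""
    return np.sign(Y) * np.maximum(np.abs(Y) - tau, 0.0)

def col_soft(Y, tau):
    """Column-norm soft-thresholding C_tau."""
    norms = np.maximum(np.linalg.norm(Y, axis=0), 1e-12)
    return Y * np.maximum(norms - tau, 0.0) / norms

def svt(Y, tau):
    """Singular value thresholding: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def apg_demix(M, D, lam, nu, n_iter=500, columnwise=False):
    """APG iterations for (9) (columnwise=False) or (10) (columnwise=True)."""
    f, nm = M.shape
    d = D.shape[1]
    Lf = np.linalg.norm(np.hstack([np.eye(f), D]), 2) ** 2  # ||[I D]||^2
    L = Lp = np.zeros((f, nm))
    S = Sp = np.zeros((d, nm))
    t, tp = 1.0, 1.0
    for _ in range(n_iter):
        # extrapolated (momentum) point, with weight (t[k-1]-1)/t[k]
        YL = L + ((tp - 1.0) / t) * (L - Lp)
        YS = S + ((tp - 1.0) / t) * (S - Sp)
        R = M - YL - D @ YS              # residual: -grad_L = R, -grad_S = D^T R
        Lp, Sp = L, S
        L = svt(YL + R / Lf, nu / Lf)    # prox of (nu/Lf)*||.||_*
        G = YS + D.T @ R / Lf
        thr = nu * lam / Lf              # prox of the (scaled) sparsity penalty
        S = col_soft(G, thr) if columnwise else soft(G, thr)
        tp, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return L, S

# Example: L_hat, S_hat = apg_demix(M, D, lam=0.1, nu=1.0)
```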

### IV-C Parameter Selection

Since the choices of regularization parameters prescribed by our main theoretical results involve quantities (such as the incoherence measures) that cannot be evaluated in practice, we employ a grid-search strategy over the range of admissible values for the low-rank and dictionary-sparse components to find the best values of the regularization parameters. We now discuss the specifics of the grid search for each sparsity case.

#### IV-C1 Selecting parameters for the entry-wise case

The choice of the parameters ν and λe in Algorithm 1 is based on the optimality conditions of the optimization problem shown in (9). As presented in [39], the ranges of the parameters ν and νλe associated with the low-rank part L and the sparse coefficient matrix S, respectively, are ν∈(0,∥M∥] and νλe∈(0,∥D⊤M∥∞]; i.e., for Algorithm 1 we set ν0=∥M∥.

These ranges for ν and νλe are derived from the optimization problem shown in (9). Specifically, by analyzing the optimality conditions of (9), we find the largest values of these regularization parameters which yield the all-zero solution pair (L0,S0)=(0,0); these values then define the upper ends of the ranges. For instance, let λ∗:=ν and λ1:=νλe. Then the optimality condition for the low-rank part is given by

 0∈λ∗∂L∥L∥∗−(M−L−DS),

where the sub-differential set ∂L∥L∥∗ is defined as

 ∂L∥L∥∗∣L=L0={UV⊤+W : ∥W∥≤1, PL(W)=0}.

Therefore, for a zero solution pair (L0,S0) we have

 {λ∗W=M : ∥W∥≤1, PL(W)=0},

which yields the condition ∥M∥≤λ∗. Therefore, the maximum value of λ∗ which drives the low-rank part to an all-zero solution is ∥M∥.

Similarly, for the dictionary-sparse component, the optimality condition for choosing λ1 is given by

 0∈λ1∂S∥S∥1−D⊤(M−L−DS),

where the sub-differential set ∂S∥S∥1 is defined as

 ∂S∥S∥1∣S=S0={sign(S0)+F : ∥F∥∞≤1, PSe(F)=0}.

Again, for a zero solution pair (L0,S0) we need

 {λ1F=D⊤M : ∥F∥∞≤1, PSe(F)=0},

which implies ∥D⊤M∥∞≤λ1. That is, the maximum value of λ1 that drives the dictionary-sparse part to zero is ∥D⊤M∥∞.

#### IV-C2 Selecting parameters for the column-wise case

Again, the choice of the parameters ν and λc is derived from the optimization problem shown in (10). In this case, the ranges of the parameters ν and νλc associated with the low-rank part L and the sparse coefficient matrix S, respectively, are ν∈(0,∥M∥] and νλc∈(0,∥D⊤M∥∞,2]; i.e., for Algorithm 1 we again set ν0=∥M∥. These ranges are evaluated via an analysis similar to the entry-wise case, using the optimality conditions of the optimization problem shown in (10) instead of (9).
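
In code, the upper ends of these ranges are cheap to compute; the helper below (param_upper_bounds is a hypothetical name) evaluates ∥M∥, ∥D⊤M∥∞, and ∥D⊤M∥∞,2, from which a grid over (ν, λ) can be built.

```python
import numpy as np

def param_upper_bounds(M, D):
    """Upper ends of the admissible ranges from Sections IV-C1 and IV-C2."""
    nu_max = np.linalg.norm(M, 2)                    # ||M|| (spectral norm)
    G = D.T @ M
    nulam_e_max = np.max(np.abs(G))                  # ||D^T M||_inf   (entry-wise)
    nulam_c_max = np.max(np.linalg.norm(G, axis=0))  # ||D^T M||_inf,2 (column-wise)
    return nu_max, nulam_e_max, nulam_c_max

# Example grid: nu0 = nu_max; lambda values log-spaced in (0, nulam_e_max/nu0].
```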

## V Experimental Evaluation

We now evaluate the performance of the proposed technique on real HS data. We begin by introducing the dataset used for the simulations, following which we describe the experimental set-up and present the results.

### V-A Data

Indian Pines Dataset: We first consider the “Indian Pines” dataset [14], which was collected over the Indian Pines test site in North-western Indiana in June 1992 using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [47] sensor, a popular choice for collecting HS images for various remote sensing applications. This dataset consists of spectral reflectances across 224 bands in the wavelength range 400–2500 nm, from a scene which is composed mostly of agricultural land along with two major dual-lane highways, a rail line, and some built structures, as shown in Fig. 3(a). The dataset is further processed by removing the bands corresponding to water absorption, which results in a HS data-cube of dimensions 145×145×200, as visualized in Fig. 1; here, n=m=145 and f=200. This modified dataset is available as the “corrected Indian Pines” dataset [14], with the ground-truth containing 16 classes; henceforth, we refer to it as the “Indian Pines Dataset”. We form the data matrix M by stacking the voxels of the image side-by-side, which results in a matrix of size f×nm=200×21025.