I. Introduction
Hyperspectral imaging has been widely used in many fields since it provides the ability to record the same scene with hundreds of contiguous and narrow spectral bands [1, 2]. Due to the low spatial resolution of the sensor, the spectra of spatially neighboring substances inevitably merge together, leading to mixed pixels in hyperspectral data. This makes Hyperspectral Unmixing (HU) an essential step for hyperspectral image analysis. Generally, the task of HU is to decompose each pixel spectrum into a set of constituent spectra (called endmembers, such as tree, water, soil, etc.) and their corresponding percentages (called abundances) [3, 4]. In this paper, we focus on the unsupervised HU problem, for which both endmembers and abundances are unknown, making HU a very challenging problem.
In general, the linear HU methods can be roughly classified into two categories: geometrical methods
[5, 6, 7] and statistical ones [8, 9, 10, 11]. Geometrical methods assume that hyperspectral pixels are located within a simplex whose vertices correspond to the endmembers. N-FINDR [12] and Vertex Component Analysis (VCA) [5] are two typical geometrical methods. The former treats the vertices of the maximum-volume simplex as endmembers, while the latter searches for the endmembers through projection. Both methods are simple and fast. Unfortunately, they require the existence of pure pixels for each endmember, which is usually unavailable in practice. Moreover, geometrical methods fail to consider the local structures latent in hyperspectral data, leading to an inaccurate estimation of
endmembers and abundances. Accordingly, a number of statistical methods have been proposed. Nonnegative Matrix Factorization (NMF) [13] and its extensions are typical statistical methods. As an unsupervised method, NMF tries to find two nonnegative matrices whose product approximates the original matrix. The nonnegative constraint on both factor matrices is valuable for two reasons. First, the endmembers and their corresponding abundances are both nonnegative, which makes NMF physically suited to the HU problem. Second, the nonnegative constraint allows additive, not subtractive, combinations, leading to a parts-based representation [13]. The parts-based property makes the representation results more intuitive and interpretable, since psychological and physiological evidence has shown that the human brain works in a parts-based way [14, 15].
Although NMF has enjoyed great success in many applications, such as face analysis [16, 17] and document clustering [18, 19], it has three weaknesses. First, the solution space of NMF is large, which is caused by the nonconvex objective function. Many extensions have been proposed to reduce the solution space by adding various constraints to the NMF framework, such as nonsmooth NMF [20], MVC-NMF [21], MDMD-NMF [22], GLNMF [23] and so on. Second, the parts-based property of NMF is usually not strong enough for the HU problem, resulting in a less expressive estimation of endmembers. A possible solution is to impose a sparsity constraint on the NMF framework [24, 25], with a regularization parameter to control the weight of the parts-based property. Third, NMF does not consider similarities between neighboring pixels, so structural information goes unused.
In this paper we propose an effective method, named Structured Sparse NMF (SSNMF), to overcome the above three limitations of NMF. The motivation is based on two priors. First, due to the smoothness of local patches in images, there are many regions where the spectral curves of all pixels are similar to each other. Fig. 1c illustrates an example, where the pixels in the road region have a similar visual appearance (see Fig. 1d). These regions are the structures that should be preserved while unmixing. Second, most pixels in hyperspectral data are mixed by only a few endmembers [25, 26, 24]. In summary, the pixels in the same structure are sparsely mixed by a common set of relevant endmembers. This can be clearly observed in Fig. 1b, where the proportions of four colors represent the abundances of four endmembers: Red (Tree), Blue (Water), Green (Soil) and Black (Road). As can be seen, the colors in Fig. 1b are regionally smooth and sparsely mixed by Red, Blue, Green and Black ink.
Based on the above analysis, we introduce the Structured Sparse constraint (SS-constraint for short) from the following two aspects. First, we model the latent structures by encoding a weighted graph on the hyperspectral pixels. In this way, highly related neighboring pixels are grouped on the graph. Accordingly, a graph constraint is employed in SSNMF to transfer these structures into the unmixing results. Second, we use the lasso penalty to sparsely encode the abundances for each pixel. With the SS-constraint, we can learn an abundance space in which the abundances are regionally smooth and sparse, as shown in Fig. 1b. Experiments on real hyperspectral data with different noise levels illustrate that SSNMF outperforms the state-of-the-art methods.
Below are some remarks about our method:

We propose a meaningful SS-constraint and apply it in the NMF framework. The basic idea is to encourage highly similar pixels on the graph to share correlated sparse abundances, which is similar to [27]. With the help of the SS-constraint, SSNMF overcomes the three limitations of NMF for the HU problem.

SSNMF is an effective method for HU problems. We compare our method with several state-of-the-art methods on two hyperspectral data sets. Both quantitative and qualitative results show that our method outperforms the state-of-the-art methods significantly.
The remainder of this paper is organized as follows. In Section 2, we briefly review the Linear Mixture Model (LMM) and the Nonnegative Matrix Factorization (NMF) method. Section 3 presents the formulation, the optimization algorithm, the convergence proof and the analysis of the computational complexity of the SSNMF method. Extensive results as well as detailed comparisons and analyses are presented in Section 4, followed by conclusions in Section 5.
II. Preliminaries
In this section, we introduce the Linear Mixture Model (LMM) and the NMF method. The former is the fundamental model for the methods mentioned in the following sections, while the latter is the basic method we build on.
II-A Linear Mixture Model (LMM)
LMM is the most popular model for hyperspectral image analysis [28, 3]. Suppose we are given a hyperspectral image $X = [x_1, \cdots, x_N] \in \mathbb{R}^{B \times N}$ with $B$ bands and $N$ pixels. LMM assumes that each pixel $x_n$ can be approximated by a nonnegative linear combination of $K$ endmembers (bases) as follows

$x_n = M a_n + e_n,$   (1)

where $M = [m_1, \cdots, m_K] \in \mathbb{R}^{B \times K}$ is the endmember matrix with $K$ endmembers, $a_n \in \mathbb{R}^{K}$ is the abundance vector associated with pixel $x_n$, and $e_n$ is a residual term. For all $N$ pixels together, we have the matrix form of LMM as

$X = MA + E,$   (2)

where $X$ is the hyperspectral image with column vectors corresponding to pixels, and $A = [a_1, \cdots, a_N] \in \mathbb{R}^{K \times N}$ is the abundance matrix, whose column vector $a_n$ contains the fractional abundances of the $K$ endmembers for pixel $x_n$. $a^k$ is the $k$-th row vector in $A$ that contains the fractional abundances of the $k$-th endmember (i.e., $m_k$) associated with all $N$ pixels. Here $a^k$ is called the $k$-th abundance map. $E \in \mathbb{R}^{B \times N}$ is a residual term.
For HU problems, we have $K \ll \min(B, N)$, which indicates that very few endmembers are latent in the hyperspectral data space. In addition, most pixels are mixed by only some of the endmembers. Therefore, good approximations can be achieved if the following two conditions are satisfied: (1) the factorization unveils the intrinsic structures in the hyperspectral data [13], whose rank is $K$; (2) the learned endmembers are expressive [16].
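As a concrete illustration of the matrix form in Eq. (2), the sketch below generates a toy linear mixture. The sizes, the Dirichlet prior used to draw nonnegative abundances summing to one, and the noise scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
B, K, N = 162, 4, 100                      # bands, endmembers, pixels (illustrative)

M = rng.random((B, K))                     # nonnegative endmember signatures (B x K)
A = rng.dirichlet(np.ones(K), size=N).T    # abundances (K x N): nonnegative, columns sum to one
E = 0.01 * rng.standard_normal((B, N))     # small residual term

X = M @ A + E                              # observed hyperspectral data (B x N), Eq. (2)
```

Each column of `X` is one mixed pixel; each row of `A` is one abundance map.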
II-B Nonnegative Matrix Factorization
NMF is a popular method that builds on LMM. The aim is to approximately decompose a large nonnegative matrix $X$ into two nonnegative matrices $M$ and $A$ by minimizing the following objective function [13],

$\min_{M \ge 0, A \ge 0} \; \|X - MA\|_F^2,$   (3)

where $\|\cdot\|_F$ is the Frobenius norm, and all elements in $M$ and $A$ are unknown and nonnegative. Although (3) is convex with respect to $M$ and $A$ separately, it is nonconvex in both variables together [29]. Therefore it is unrealistic to find the global minimum. Instead, [29] proposed the multiplicative update rules

$M_{bk} \leftarrow M_{bk} \dfrac{(XA^T)_{bk}}{(MAA^T)_{bk}}, \qquad A_{kn} \leftarrow A_{kn} \dfrac{(M^TX)_{kn}}{(M^TMA)_{kn}},$   (4)

under which the objective has been proved to be nonincreasing. There are other methods to solve problem (3), such as the active-set method [30], alternating nonnegative least squares [31], projected gradient descent [32] and so on.
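The multiplicative rules in Eq. (4) can be sketched as follows. The random initialization, iteration count, and the small `eps` guard against division by zero are implementation choices, not prescribed by the source.

```python
import numpy as np

def nmf_multiplicative(X, K, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF: X ~ M A with all entries nonnegative."""
    rng = np.random.default_rng(seed)
    B, N = X.shape
    M = rng.random((B, K)) + eps
    A = rng.random((K, N)) + eps
    for _ in range(n_iter):
        A *= (M.T @ X) / (M.T @ M @ A + eps)   # update rule for A, Eq. (4)
        M *= (X @ A.T) / (M @ A @ A.T + eps)   # update rule for M, Eq. (4)
    return M, A

# Toy usage on a random nonnegative matrix
X = np.abs(np.random.default_rng(1).standard_normal((30, 50)))
M, A = nmf_multiplicative(X, K=4)
err = np.linalg.norm(X - M @ A, 'fro')
```

The updates keep `M` and `A` nonnegative by construction, since every factor in the rules is nonnegative.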
Compared with the Vector Quantization (VQ) [33] method and the Principal Component Analysis (PCA) [34] method, the nonnegative constraint on both factor matrices enables a parts-based representation, which makes NMF superior to PCA and VQ in facial image analysis and document representation [13].

III. Structured Sparse NMF (SSNMF)
As mentioned before, NMF applies the nonnegative constraint on the factor matrices, which leads to a parts-based representation of the original data. However, it fails to take into account the sparsity prior on the abundances and the manifold structures hidden in hyperspectral data, which are essential for the HU problem. In this section, we introduce the SSNMF method to overcome these limitations by adopting a structured sparse constraint.
III-A Formulation for SSNMF
In the framework of SSNMF, the fundamental problem is how to construct the structures. Manifold learning theory [35] and spectral graph theory [36, 37] provide a good idea: local structures can be learned by encoding the similarities between pixels on a nearest-neighbor graph. Combining the overlapping local structures, we obtain the graph structure that contains the full similarity information between pixel pairs.
Typically, there are two ways to define the nearest neighbors: considering the feature distances only, or the spatial distances only. The former is well suited to less geometrically intuitive databases that treat each image as a node, such as the Yale database (http://cvc.yale.edu/projects/yalefaces/yalefaces.html) and the ORL database (http://www.uk.research.att.com/facedatabase.html) [35], while the latter is adapted to geometrically motivated databases that treat each pixel, lying on the grid of a 2D plane, as a node, such as hyperspectral images. Due to the local smoothness prior of hyperspectral images, spatially neighboring pixels are very likely to share similar features (i.e., spectral signatures). However, this prior does not hold at image edges, where the pixels on the two sides are dissimilar. This motivates the weighting method of the proposed SSNMF, which is introduced in the following paragraph.
The proposed weighting method considers the spatial distance and the feature distance simultaneously when constructing the local structures. Suppose we are given a hyperspectral image with $N$ pixels. We view each pixel as one node and construct a weighted graph on the nodes. The weight $W_{ij}$ between nodes $i$ and $j$ is obtained by the Spectral Angle Distance (SAD) metric [38, 5] defined as follows,
$W_{ij} = \begin{cases} \exp\left(-\mathrm{SAD}(x_i, x_j)\right), & x_j \in \mathcal{N}(x_i), \\ 0, & \text{otherwise}, \end{cases}$   (5)

where

$\mathrm{SAD}(x_i, x_j) = \arccos\left(\dfrac{x_i^T x_j}{\|x_i\|_2 \, \|x_j\|_2}\right).$   (6)
In (5), the neighbors of $x_i$ are collected in $\mathcal{N}(x_i)$, and there are two conditions for a pixel $x_j$ to be a neighbor: (1) nearest spatial distance, i.e., $x_j$ is in the local window centered at $x_i$ (see Fig. 2b); (2) nearest feature distance, i.e., after calculating the SAD similarities between $x_i$ and its neighboring pixels in the local window, $x_j$ is among the top 30% (an empirical threshold), as shown in Fig. 2c. As a result, we can partly avoid spreading the graph across dissimilar pixels. This is essential when the local window covers an edge, where the pixels on the two sides belong to different targets. Fig. 2e illustrates an example, where 'Road' is on one side of the edge, while 'Tree' is on the other side. The SAD metric, defined in Eq. (6), is a suitable metric, since it is an essential estimation metric for endmembers [25, 38, 5, 26].
Through the above process, we can build the local weighted graph, or structure, with node $x_i$ at the center. In this way, highly similar neighboring pixels are grouped together (see Figs. 2c and 2d). Repeating the above procedure for each pixel, we construct the weight matrix $W$ that contains the manifold structures latent in the hyperspectral data. Our goal is to transfer these structures to the learned abundance space. This can be done by first measuring the distance between abundance vectors $a_i$ and $a_j$,

$d(a_i, a_j) = \|a_i - a_j\|_2^2,$   (7)

where $\|\cdot\|_2$ denotes the $\ell_2$ norm of a vector, measuring the Euclidean distance. One intuitive way is to make $d(a_i, a_j)$ small if $x_i$ and $x_j$ are similar [35]. This can be achieved by minimizing

$\dfrac{1}{2}\sum_{i,j=1}^{N} \|a_i - a_j\|_2^2 \, W_{ij} = \mathrm{Tr}(A L A^T),$   (8)

where $\mathrm{Tr}(\cdot)$ represents the trace of a matrix and $D$ is a diagonal matrix whose elements are the column (or row, as $W$ is symmetric) sums of $W$, i.e., $D_{ii} = \sum_j W_{ij}$. $L = D - W$ is called the graph Laplacian [36, 39].
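The neighbor selection and SAD-based weighting described above can be sketched as follows. The window size, the kept fraction of neighbors, and the `exp(-SAD)` weight form are illustrative assumptions (the paper's exact weighting may differ); the graph Laplacian `L = D - W` then feeds the smoothness penalty of Eq. (8).

```python
import numpy as np

def sad(x, y):
    """Spectral Angle Distance between two spectra, Eq. (6)."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

def build_graph(X, rows, cols, win=5, keep=0.3):
    """Weight matrix over pixels on a rows x cols grid: candidates come from
    a spatial window, and only the top `keep` fraction by SAD similarity is kept."""
    N = rows * cols
    W = np.zeros((N, N))
    r = win // 2
    for i in range(N):
        ri, ci = divmod(i, cols)
        cand = [rj * cols + cj
                for rj in range(max(0, ri - r), min(rows, ri + r + 1))
                for cj in range(max(0, ci - r), min(cols, ci + r + 1))
                if rj * cols + cj != i]
        d = np.array([sad(X[:, i], X[:, j]) for j in cand])
        for k in np.argsort(d)[: max(1, int(keep * len(cand)))]:
            W[i, cand[k]] = np.exp(-d[k])      # more similar -> larger weight
    W = np.maximum(W, W.T)                     # symmetrize
    D = np.diag(W.sum(axis=1))
    return W, D - W                            # weight matrix and graph Laplacian

# Toy 3x3 image with 5 bands; Tr(A L A^T) with this L gives the penalty of Eq. (8)
Xs = np.random.default_rng(2).random((5, 9)) + 0.1
W, Lap = build_graph(Xs, rows=3, cols=3, win=3, keep=0.5)
```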
By minimizing Eq. (8), the manifold structures in the original data space are transferred to the learned abundance space. However, Eq. (8) does not consider the fact that most manifold structures in hyperspectral data are dominated by specific sets of only a few endmembers. Here a lasso penalty [40, 41] on the abundance matrix is adopted,

$\|A\|_1 = \sum_{k=1}^{K}\sum_{n=1}^{N} |A_{kn}|,$   (9)

resulting in a structured sparse constraint as

$R_{SS}(A) = \alpha \, \mathrm{Tr}(A L A^T) + \beta \, \|A\|_1,$   (10)

where the lasso penalty $\|A\|_1$ is used to achieve an expressive set of endmembers [16]. Note that the structure regularization defined in Eq. (8) provides the lasso penalty with a structure constraint, under which the sparse abundance vectors related to the same structure tend to be similar. For example, if $x_i$ and $x_j$ are in the same local structure (i.e., $W_{ij}$ is large), then their corresponding sparse abundances $a_i$ and $a_j$ tend to be similar. In addition, $W_{ij}$ controls how strongly $a_i$ and $a_j$ share similar abundances. This renders the structured sparse constraint defined in Eq. (10).
Given a hyperspectral data matrix $X$, similar to the NMF method, the objective function of SSNMF is defined as follows

$\min_{M \ge 0, A \ge 0} \; O(M, A) = \|X - MA\|_F^2 + \alpha \, \mathrm{Tr}(A L A^T) + \beta \, \|A\|_1,$   (11)

where $\alpha, \beta \ge 0$ are regularization parameters that control the complexity of the model, and all elements in $M$ and $A$ are nonnegative and unknown. We present the optimization method for the problem defined in Eq. (11) in the following subsection.
III-B Optimization for SSNMF
Similar to the NMF problem, the objective function (11) is nonconvex in $M$ and $A$ jointly. An iterative algorithm that reaches a local minimum is introduced in this subsection. Taking the nonnegative constraints on $M$ and $A$ into account, the objective function (11) can be reformulated as

$\min_{M, A} \; \|X - MA\|_F^2 + \alpha \, \mathrm{Tr}(A L A^T) + \beta \, \|A\|_1, \quad \text{s.t. } M \ge 0, \; A \ge 0.$   (12)

Let $\psi_{bk}$ and $\phi_{kn}$ be the Lagrange multipliers for the constraints $M_{bk} \ge 0$ and $A_{kn} \ge 0$ respectively, and let $\Psi = [\psi_{bk}]$, $\Phi = [\phi_{kn}]$. The Lagrangian is given by

$\mathcal{L} = \|X - MA\|_F^2 + \alpha \, \mathrm{Tr}(A L A^T) + \beta \, \|A\|_1 + \mathrm{Tr}(\Psi M^T) + \mathrm{Tr}(\Phi A^T).$   (13)
We can further obtain the partial derivatives of $\mathcal{L}$ with respect to $M$ and $A$ as

$\dfrac{\partial \mathcal{L}}{\partial M} = -2XA^T + 2MAA^T + \Psi,$   (14)

$\dfrac{\partial \mathcal{L}}{\partial A} = -2M^TX + 2M^TMA + 2\alpha A L + \beta \mathbf{1} + \Phi,$   (15)

where $\mathbf{1}$ is a $K \times N$ matrix of all ones (since $A \ge 0$, $\|A\|_1 = \sum_{k,n} A_{kn}$).
Based on the Karush-Kuhn-Tucker conditions $\psi_{bk} M_{bk} = 0$ and $\phi_{kn} A_{kn} = 0$, we obtain the following equations by setting the above partial derivatives to zero and multiplying both sides of Eqs. (14) and (15) by $M_{bk}$ and $A_{kn}$ respectively,

$\left(-XA^T + MAA^T\right)_{bk} M_{bk} = 0,$   (16)

$\left(-2M^TX + 2M^TMA + 2\alpha A L + \beta \mathbf{1}\right)_{kn} A_{kn} = 0.$   (17)
When the relation $L = D - W$ mentioned before is considered, Eq. (17) can be rewritten as

$\left(-2M^TX - 2\alpha A W + 2M^TMA + 2\alpha A D + \beta \mathbf{1}\right)_{kn} A_{kn} = 0.$   (18)
Solving Eqs. (16) and (18), we have the update rules

$M_{bk} \leftarrow M_{bk} \dfrac{(XA^T)_{bk}}{(MAA^T)_{bk}},$   (19)

$A_{kn} \leftarrow A_{kn} \dfrac{\left(2M^TX + 2\alpha A W\right)_{kn}}{\left(2M^TMA + 2\alpha A D + \beta \mathbf{1}\right)_{kn}}.$   (20)

Therefore, Eq. (12) can be solved by alternately updating $M$ and $A$ according to Eqs. (19) and (20) respectively. The SSNMF algorithm is summarized in Algorithm 1.
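A minimal sketch of the alternating updates, assuming the objective of Eq. (11) with graph Laplacian `Lap = D - W`. The chain graph, sizes, regularization values, and iteration count in the demo are illustrative assumptions.

```python
import numpy as np

def ssnmf(X, K, Lap, alpha=0.1, beta=0.1, n_iter=150, eps=1e-9, seed=0):
    """Alternating multiplicative updates for NMF plus a graph term
    alpha*Tr(A L A^T) and a lasso term beta*||A||_1 on the abundances."""
    rng = np.random.default_rng(seed)
    B, N = X.shape
    D = np.diag(np.diag(Lap))            # degree matrix
    W = D - Lap                          # recover weights from L = D - W
    M = rng.random((B, K)) + eps
    A = rng.random((K, N)) + eps
    for _ in range(n_iter):
        M *= (X @ A.T) / (M @ A @ A.T + eps)                       # Eq. (19)
        num = 2 * (M.T @ X) + 2 * alpha * (A @ W)
        den = 2 * (M.T @ M @ A) + 2 * alpha * (A @ D) + beta + eps
        A *= num / den                                             # Eq. (20)
    return M, A

# Toy usage: 6 pixels on a chain graph, 20 bands, 3 endmembers
rng = np.random.default_rng(4)
Wg = np.eye(6, k=1) + np.eye(6, k=-1)    # chain-graph weights
Lap = np.diag(Wg.sum(axis=1)) - Wg
X = np.abs(rng.standard_normal((20, 6)))
M, A = ssnmf(X, K=3, Lap=Lap, alpha=0.05, beta=0.05)
```

Both updates preserve nonnegativity, since every factor in the rules is nonnegative.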
It is worth pointing out that if $M$ and $A$ form a solution for the NMF algorithm, then $MQ$ and $Q^{-1}A$ are also a solution for any diagonal matrix $Q$ with positive diagonal elements. To remove this uncertainty, a simple and widely used method is to scale each column of $M$ (or each row of $A$) to unit $\ell_2$ (or $\ell_1$) norm [18]. This can be achieved by

$M_{bk} \leftarrow \dfrac{M_{bk}}{\|m_k\|_2}, \qquad A_{kn} \leftarrow A_{kn} \, \|m_k\|_2.$   (21)

The proposed SSNMF also employs this strategy to eliminate the uncertainty, which is essential for a practical implementation.
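The rescaling of Eq. (21) can be sketched as: scale each column of $M$ to unit $\ell_2$ norm and compensate the corresponding row of $A$, leaving the product $MA$ unchanged.

```python
import numpy as np

def normalize_factors(M, A, eps=1e-12):
    """Remove the M Q, Q^{-1} A scaling ambiguity by rescaling each column
    of M to unit l2 norm and compensating the rows of A, per Eq. (21)."""
    scale = np.linalg.norm(M, axis=0) + eps   # per-endmember column norms
    return M / scale, A * scale[:, None]

# Toy usage: the product M A is invariant under the rescaling
rng = np.random.default_rng(3)
M0, A0 = rng.random((10, 4)), rng.random((4, 8))
M1, A1 = normalize_factors(M0, A0)
```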
III-C Proof of Convergence
In this subsection, we show that the objective in Eq. (12) is nonincreasing under the update rules (19) and (20) at each iteration, and that it converges to a local minimum after finitely many iterations. Since the update rule for $M$ is the same as that of NMF, whose convergence proof is provided by [29], we only focus on the proof of the update rule (20) for $A$. A common technique from the Expectation-Maximization (EM) algorithm [42, 43] is adopted: we introduce an auxiliary function, which is exactly an upper bound function.

Definition 1. $G(a, a')$ is an auxiliary function for $F(a)$ if the following properties are satisfied: $G(a, a') \ge F(a)$ and $G(a, a) = F(a)$.
By minimizing the auxiliary function,

$a^{(t+1)} = \arg\min_a G(a, a^{(t)}),$   (22)

we can get a solution that makes $F$ nonincreasing,

$F(a^{(t+1)}) \le F(a^{(t)}).$   (23)
Proof: This is because of the following inequalities,

$F(a^{(t+1)}) \le G(a^{(t+1)}, a^{(t)}) \le G(a^{(t)}, a^{(t)}) = F(a^{(t)}).$   (24)

∎
Now, with $M$ fixed, the objective function defined in Eq. (11) is only related to the variable $A$. Considered element-wise with respect to a single entry $A_{kn}$, it can be represented by $F_{kn}(A_{kn})$.   (25)

Specifically, it is a quadratic function that equals its Taylor expansion

$F_{kn}(A_{kn}) = F_{kn}(A_{kn}^{(t)}) + F'_{kn}(A_{kn}^{(t)})\,(A_{kn} - A_{kn}^{(t)}) + \frac{1}{2} F''_{kn}(A_{kn}^{(t)})\,(A_{kn} - A_{kn}^{(t)})^2,$   (26)

where $F'_{kn} = \left(-2M^TX + 2M^TMA + 2\alpha A L + \beta\mathbf{1}\right)_{kn}$ and $F''_{kn} = 2(M^TM)_{kk} + 2\alpha L_{nn}$. A function constructed from the update rule (20) is given by

$G(A_{kn}, A_{kn}^{(t)}) = F_{kn}(A_{kn}^{(t)}) + F'_{kn}(A_{kn}^{(t)})\,(A_{kn} - A_{kn}^{(t)}) + P_{kn}\,(A_{kn} - A_{kn}^{(t)})^2,$   (27)

where

$P_{kn} = \dfrac{\left(M^TMA^{(t)} + \alpha A^{(t)}D\right)_{kn} + \beta/2}{A_{kn}^{(t)}}.$   (28)

This means that we can obtain the update rule (20) by minimizing the function $G(A_{kn}, A_{kn}^{(t)})$ at the $t$-th iteration step.
Lemma 2. $G(A_{kn}, A_{kn}^{(t)})$ defined in Eq. (27) is an auxiliary function for $F_{kn}(A_{kn})$.

Proof: When $A_{kn} = A_{kn}^{(t)}$, we have $G(A_{kn}^{(t)}, A_{kn}^{(t)}) = F_{kn}(A_{kn}^{(t)})$. When $A_{kn} \ne A_{kn}^{(t)}$, we have to prove $G(A_{kn}, A_{kn}^{(t)}) \ge F_{kn}(A_{kn})$. Since the constant term and linear term in Eq. (26) are equal to their counterparts in Eq. (27), Lemma 2 can be proven by comparing the quadratic terms as

$P_{kn} \ge \frac{1}{2} F''_{kn} = (M^TM)_{kk} + \alpha L_{nn}.$   (29)
Since is a diagonal matrix, Eq. (28) is simplified as
(30) 
The inequality (29) could be rewritten as
(31) 
which can be proven by showing that both terms are greater than or equal to zero.
To begin with, the first term in Eq. (31) is compared as
(32) 
Then we compare the second term in Eq. (31)
(33) 
When the relation $L = D - W$ is considered, the term can be rewritten as
(34) 
Because of the inequality,
Eq. (34) has the following relationship,
(35) 
Thus Eq. (33) could be relaxed as
(36) 
For simplicity, we ignore the impact of $\beta$ and obtain
(37) 
Because of the sparse distribution of the column vectors in $X$ and the sparsity constraint on $A$, some vectors are always very sparse. Besides, since the relevant values are positive and finite, the required inequality holds, and it is straightforward that the term in (36) is nonnegative. Thus, we prove the inequality (29) by showing the nonnegativity of the terms in (32) and (36). Hence $G(A_{kn}, A_{kn}^{(t)})$ is an auxiliary function (upper bound function) for $F_{kn}(A_{kn})$. ∎
III-D Comparison with Gradient Descent Method
The Gradient Descent Method (GDM) [44] is a widely used optimization algorithm for finding a local minimum of an objective function. In this subsection, we relate the update rules defined in Eqs. (19) and (20) to those of the GDM. For the problem defined in (11), the basic GDM update rules are

$M \leftarrow M - \eta_M \odot \dfrac{\partial O}{\partial M}, \qquad A \leftarrow A - \eta_A \odot \dfrac{\partial O}{\partial A}.$   (38)

The parameters $\eta_M$ and $\eta_A$ are the learning rates. For our problem, two conditions must hold to ensure that physically meaningful local minima are obtained by the update rules in (38). First, the values of $\eta_M$ and $\eta_A$ must be relatively small to reach local minima. Second, the values of $\eta_M$ and $\eta_A$ must preserve the nonnegativity of $M$ and $A$ at each iteration. One choice of the learning rates $\eta_M$ and $\eta_A$ is as follows:
TABLE I: Parameters used in the complexity analysis.

Parameter | Description
$B$ | the number of bands
$K$ | the number of endmembers
$N$ | the number of pixels in the hyperspectral image
$w$ | the number of pixels in the local window
$p$ | the percentage of nearest neighbors in the local window
$T$ | the number of iterations

TABLE II: Computational operation counts for each iteration of NMF and SSNMF.

Method | addition | multiplication | division | overall
NMF | $O(BKN)$ | $O(BKN)$ | $O(KN)$ | $O(BKN)$
SSNMF | $O(BKN + wpKN)$ | $O(BKN + wpKN)$ | $O(KN)$ | $O(BKN + wpKN)$
Let $\eta_M = \dfrac{M}{2MAA^T}$ (element-wise); then the GDM update for $M$ in (38) reduces to the multiplicative rule (19).

Let $\eta_A = \dfrac{A}{2M^TMA + 2\alpha AD + \beta\mathbf{1}}$ (element-wise); then the GDM update for $A$ in (38) reduces to the multiplicative rule (20).
III-E Computational Complexity Analysis
In this subsection, we compare the computational complexity of the proposed SSNMF with that of the NMF algorithm. Since the SSNMF and NMF algorithms are solved in an iterative way, we describe the complexity analysis in two steps. First, we analyze the computational complexity of each iteration; second, the number of iteration steps is considered.
Besides the update rules, SSNMF needs about $O(wNB)$ operations to construct the structure relationships between pixel pairs. Suppose that $T$ iterations in total are needed to obtain a convergent result. The total computational cost of NMF is about

$O(TBKN).$   (39)

The total computational cost of SSNMF is about

$O\left(wNB + T(BKN + wpKN)\right).$   (40)
As can be seen, the computational complexity of SSNMF is slightly higher than that of NMF when both methods need $T$ iterations. However, we will show in subsection IV-H that SSNMF needs fewer iterations than NMF. In conclusion, the total computational complexity of SSNMF is very close to that of NMF.
Usually, big-$O$ notation [45] is used to analyze the complexity of an algorithm. To be clear, the precise arithmetic operations as well as the big-$O$ complexity analysis for each iteration are summarized in Table II. Table I lists the parameters used in the complexity analysis. There are three kinds of arithmetic operations in each iteration: addition, multiplication and division. All of them are calculated in floating-point precision. Specifically, two tips are essential to get the results in Table II. For tip 1, the order of matrix multiplication is important. Taking the matrix product $M^TMA$ for example, there are two orders: $(M^TM)A$ and $M^T(MA)$. The former approximately needs $(BK^2 + K^2N)$ AM (Additions and Multiplications), while the latter costs $2BKN$ AM. Since $K \ll \min(B, N)$, we have $BK^2 + K^2N \ll 2BKN$, which means that $M^T(MA)$ is much more costly than $(M^TM)A$. For tip 2, there are symmetric matrices in the computation. If the fact that $M^TM$ is a symmetric matrix is considered, the multiplication of $M^T$ with $M$ costs about $\frac{1}{2}BK(K+1)$ AM.
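Tip 1 can be checked with a simple multiply-add count. The sizes below are illustrative, chosen only so that $K \ll \min(B, N)$.

```python
def matmul_cost(p, q, r):
    """Approximate count of multiply-add pairs for a (p x q) @ (q x r) product."""
    return p * q * r

# Illustrative sizes: B bands, K endmembers, N pixels, K << min(B, N)
B, K, N = 162, 4, 10000

# Order (M^T M) A keeps the intermediate small (K x K):
cost_small_first = matmul_cost(K, B, K) + matmul_cost(K, K, N)
# Order M^T (M A) forms a large B x N intermediate:
cost_large_first = matmul_cost(B, K, N) + matmul_cost(K, B, N)
```

With these sizes the first order needs roughly 1.6e5 multiply-adds while the second needs roughly 1.3e7, matching the $BK^2 + K^2N \ll 2BKN$ argument.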
IV. Experiments
In this section, we investigate the effectiveness of the proposed SSNMF algorithm for the Hyperspectral Unmixing (HU) problem. Several experiments are carried out to show that our algorithm is well suited for the HU problem.
IV-A Evaluation Metrics
In order to evaluate the proposed method, we adopt two performance metrics: the Spectral Angle Distance (SAD) and the Root Mean Square Error (RMSE), which are widely used [25, 38, 26, 5]. The SAD is used to evaluate the quality of the estimated endmembers; it is the angle between an estimated endmember and its corresponding ground truth, defined as follows

$\mathrm{SAD}(m_k, \hat{m}_k) = \arccos\left(\dfrac{m_k^T \hat{m}_k}{\|m_k\|_2 \, \|\hat{m}_k\|_2}\right),$   (41)

where $m_k$ denotes the ground truth of one endmember and $\hat{m}_k$ is the corresponding estimated result. A smaller SAD corresponds to better performance. The Root Mean Square Error (RMSE) is used to evaluate the quality of the estimated abundance maps. It is given by

$\mathrm{RMSE}(a^k, \hat{a}^k) = \sqrt{\dfrac{1}{N}\left\|a^k - \hat{a}^k\right\|_2^2},$   (42)

where $N$ is the number of pixels in the hyperspectral image, $a^k$ is the ground truth of one abundance map, and $\hat{a}^k$ denotes the corresponding estimated result. A smaller RMSE corresponds to better performance. In the following subsections, the abundance maps (i.e., $\hat{a}^k$) will be shown in two ways: in pseudocolor and in gray scale, as shown in Figs. 12, 13, 14 and 15.
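The two metrics can be sketched directly from Eqs. (41) and (42):

```python
import numpy as np

def sad_metric(m_true, m_est):
    """Spectral Angle Distance between an estimated endmember and its
    ground truth, Eq. (41); smaller is better."""
    c = np.dot(m_true, m_est) / (np.linalg.norm(m_true) * np.linalg.norm(m_est))
    return np.arccos(np.clip(c, -1.0, 1.0))

def rmse_metric(a_true, a_est):
    """Root Mean Square Error between abundance maps, Eq. (42)."""
    return np.sqrt(np.mean((a_true - a_est) ** 2))
```

Note that the SAD is scale-invariant: a rescaled copy of the same spectrum gives an angle of zero, which is why the normalization of Eq. (21) does not affect it.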
IV-B Compared Algorithms
We compare the proposed method with seven related methods on several data sets. The details of these evaluated algorithms (including our algorithm) are listed as follows

Our algorithm: Structured Sparse regularized Nonnegative Matrix Factorization (SSNMF in short) is a new algorithm proposed in this paper.

Vertex Component Analysis (VCA in short) [5] is a classic geometrical method that needs the existence of pure pixels for each endmember. Unlike the other algorithms, which estimate the endmembers and abundances simultaneously, VCA can only estimate the endmembers. The abundances are then estimated by solving a constrained Least Squares Problem [46]. The code for this algorithm is obtained from “http://www.lx.it.pt/bioucas/code.htm”.

Nonnegative Matrix Factorization (NMF in short) [13] is a typical statistical method. Due to the nonnegative constraint, which can be seen as a special case of a sparsity constraint [25], NMF tends to produce parts-based results. The code for this method is downloaded from “http://www.ee.columbia.edu/grindlay/code.html”.

Nonnegative Sparse Coding ($\ell_1$-NMF in short) is a popular sparsity-regularized NMF method proposed by [47]. The code is available at “http://www.cs.helsinki.fi/u/phoyer/software.html”.

$\ell_{1/2}$ sparsity-constrained Nonnegative Matrix Factorization ($\ell_{1/2}$-NMF in short) is a state-of-the-art algorithm for the HU problem. It was proposed by [25]. Since the code is unavailable on the Internet, we implemented it ourselves.

Graph regularized Nonnegative Matrix Factorization (GNMF in short), proposed by [39], is a good algorithm for extracting the graph information latent in data and transferring it to the low-dimensional representation space during the matrix factorization process. The code is obtained from “http://www.cad.zju.edu.cn/home/dengcai/Data/GNMF.html”.

Local Neighborhood Weights regularized NMF (WNMF in short) [48] is a graph-based NMF method. The main contribution of this method is that it integrates the spectral and spatial information when constructing the weighted graph. Since the code implemented by the authors has been lost, we implemented it ourselves.

Endmember Dissimilarity Constrained NMF (EDCNMF in short) [49] is different from the methods listed above. It is the only method that utilizes a constraint from the endmember prior. With this constraint, EDCNMF is highly likely to find a result whose endmember spectra are smooth and different from each other. The code is implemented by ourselves.
Among the above eight algorithms, VCA is a geometrical method; the other seven are statistical ones. There are no parameters in the NMF and VCA methods. In the next subsection, the methods to set the parameters of the $\ell_1$-NMF, $\ell_{1/2}$-NMF, GNMF, WNMF and SSNMF algorithms are introduced.
IV-C Parameter Settings
There are two essential parameters in the proposed SSNMF method: $\alpha$ controls the structured constraint and $\beta$ controls the lasso penalty. In this subsection, we introduce the methods to set $\beta$ and $\alpha$ respectively.
The value of $\beta$ is closely related to the sparsity of the abundance matrix $A$. Unfortunately, $A$ is unknown. Here we adopt a method to estimate $\beta$ by calculating the sparsity value of the hyperspectral data, based on the metric [50, 25] given by

$\beta = \dfrac{1}{\sqrt{B}} \sum_{b=1}^{B} \dfrac{\sqrt{N} - \|x^b\|_1 / \|x^b\|_2}{\sqrt{N} - 1},$   (43)

where $x^b$ is the image in the $b$-th channel of the hyperspectral data. To improve the accuracy, we set $\beta$ by searching a range around this estimate at 50 equally spaced values. The parameters in $\ell_1$-NMF and $\ell_{1/2}$-NMF are estimated in the same way.
One intuitive thought is that the value of $\alpha$ is closely related to the level of similarity between pixel pairs in the neighborhood. Thus, the value of $\alpha$ is estimated in two steps. First, we randomly select a number of local patches and compute the similarities between each central pixel and its neighboring ones. Second, $\alpha$ is set by averaging all these values. To improve the accuracy, we set $\alpha$ by searching a range around this estimate at 50 equally spaced values. The parameter that controls the strength of the graph constraint in GNMF and WNMF is set similarly.
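The per-band sparseness measure underlying Eq. (43) (attributed to [50, 25]) can be sketched as follows; it is 0 for a uniform vector and 1 for a vector with a single nonzero entry. The grid-search range around the resulting estimate is left unspecified, as in the text.

```python
import numpy as np

def hoyer_sparseness(x, eps=1e-12):
    """Sparseness of a vector: (sqrt(n) - ||x||_1/||x||_2) / (sqrt(n) - 1),
    as used inside Eq. (43); 0 = perfectly dense, 1 = maximally sparse."""
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.linalg.norm(x) + eps
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# The regularizer estimate then averages this measure over all bands, and a
# grid of candidate values (range is an assumption here) refines the choice:
# candidates = np.linspace(low, high, 50)
```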
IV-D Real Data Sets
We use two popular hyperspectral data sets [25, 38, 51], as shown in Fig. 3, to evaluate the proposed method. The details of these experiment data sets are introduced in this subsection.
Urban Data, available at http://www.tec.army.mil/Hypercube, is one of the most widely used hyperspectral data sets for HU research [25, 26]. It was recorded by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) in October 1995; the location is an urban area at Copperas Cove, TX, U.S. After the bands seriously affected by dense water vapor and atmospheric effects are removed, the remaining bands are used in this data set. There are four endmembers in this data: #1 Asphalt, which includes the road, the parking area and a few roofs; #2 Grass, which covers the areas with a green appearance in Fig. 3a; #3 Tree, which owns a spectral signature different from that of grass and appears dark green, mainly in the bottom areas of Fig. 3a; #4 Roof, which appears white in the center of Fig. 3a.
Jasper Ridge is a popular hyperspectral data set introduced in [51]. As the original hyperspectral image is too complex to obtain the ground truth, we consider a subimage of it. After removing the bands affected by dense water vapor and atmospheric effects (a common preprocessing step for HU analysis), the remaining bands are used. There are four endmembers latent in the data: #1 Road, #2 Soil, #3 Water and #4 Tree, as shown in Fig. 3b.
The ground truths for both data sets are determined using the method proposed by [52, 38]. There are three steps. First, the Virtual Dimensionality (VD) method proposed by [53] is adopted to determine the number of endmembers. Second, the endmember spectra are manually chosen from the hyperspectral data; these spectra have to enjoy high similarity with the reference spectra supplied by the USGS mineral spectral library (available at http://speclab.cr.usgs.gov/spectrallib.html), etc. Third, given the endmembers, we obtain the corresponding abundances by solving a constrained convex optimization problem, which can easily be implemented using the Matlab Optimization Toolbox. The ground truths of the listed data sets are shown in the last column of Figs. 12, 13, 14 and 15.
IV-E Performance Evaluation
To test robustness, this subsection evaluates the influence of noise on the HU performance of all methods. We choose i.i.d. zero-mean white Gaussian noise rather than correlated noise for two reasons. First, without any prior, correlated noise is very hard to add to a hyperspectral image. Second, zero-mean white Gaussian noise is the most widely used noise in HU studies [25, 26, 49].
For each hyperspectral data set, seven experiments are carried out with respect to seven levels of Gaussian noise (i.e., SNR, , , , , , dB). Each experiment is repeated 50 times, and the mean results as well as their corresponding standard deviations are provided here. The evaluation is organized in two parts: Quantitative Results and Qualitative Results.
IV-E1 Quantitative Results and Analysis
Figs. 4, 5 and Tables III, IV show the experimental results on the Urban data set. Figs. 4 and 5 illustrate the plots of the SAD (the metric for estimated endmembers) and the RMSE (the metric for estimated abundances) versus seven SNR levels of Gaussian noise, respectively. The results of our proposed SSNMF algorithm are consistently better than or comparable to those of all the other methods. In particular, SSNMF achieves significantly better performance in Figs. 4b, 4c and 4d.
Tables III and IV show the detailed SAD and RMSE performances and their corresponding standard deviations versus seven levels of Gaussian noise. The values in Table III are the averages of the statistics plotted in the four subfigures of Fig. 4. Take the value “5.7” marked in bold at the northeast corner of Table III for example: it is the average of the four SAD values of the SSNMF algorithm under the corresponding noise-level condition in Fig. 4. Similarly, Table IV illustrates the average RMSE values of the four subfigures in Fig. 5. From the above results, we observe that

For all seven noise levels, our SSNMF achieves the best or comparable performance. Compared to the best algorithms other than our proposed SSNMF algorithm (the best sparse NMF variant for SAD and EDCNMF for RMSE, respectively), SSNMF achieves a further decrement for SAD, as shown in Table III, and for RMSE, as shown in Table IV. Moreover, when the noise level varies across the seven SNR levels, our method is the most stable one. All of this shows that the structured sparse regularization is well suited for the HU problem.

Compared with the results obtained by GNMF [39] and WNMF [48], the sparsity-regularized NMF methods (i.e., $\ell_1$-NMF [47] and $\ell_{1/2}$-NMF [25]) achieve better or comparable results. This demonstrates that the sparsity constraint is more important than the graph constraint when estimating endmembers. This is mainly because the graph constraint helps to find a new abundance space in which similar pixels share similar abundances. Hence it leads to smooth or hazy abundances, which is closely related to less expressive [16] or inaccurate endmembers under the iterative update rules introduced by [13, 39].

NMF and its extensions (SSNMF, $\ell_1$-NMF [47], $\ell_{1/2}$-NMF [25] and GNMF [39]) outperform the VCA algorithm significantly. There are two reasons for this: (1) NMF-based methods overcome the requirement of pure pixels, which is essential for the VCA method; (2) the parts-based property of NMF helps to discover the latent endmembers and their corresponding abundances.
Figure 7: RMSEs of four abundance maps vs. SNRs on the Jasper Ridge data: (a) Road, (b) Soil, (c) Water and (d) Tree. The symbol at the bottom-left of each subfigure indicates that no Gaussian noise is added manually.
Figs. 6, 7 and Tables V, VI show the experimental results on the Jasper Ridge data. The HU results and their standard deviations versus seven levels of noise are summarized in Tables V and VI, with graphical plots in Figs. 6 and 7. As can be seen, our proposed SSNMF is consistently better than or comparable to all the other methods. In particular, Figs. 6a, 7a, 7b and Tables V, VI show that SSNMF outperforms the other algorithms significantly. Compared with the best algorithm other than ours for each metric, SSNMF achieves a 30.68% decrement in SAD (the endmember metric) in Table V and a 35.34% decrement in RMSE (the abundance metric) in Table VI.
IV-E2 Qualitative Results and Analysis
In order to give an intuitive comparison of the HU results, we illustrate the abundance maps on the Urban data in Figs. 12, 13 and on the Jasper Ridge data in Figs. 14, 15. There are two ways to show the abundance maps: in pseudo color (Figs. 12, 13a, 14, and 15a) and in gray scale (Figs. 13b and 15b). Taking the last subfigure in Fig. 13a for example, we illustrate the fractional abundances associated with each pixel by plotting it using proportions of red, blue, green and black ink given by the corresponding abundance fractions [54]. So, for instance, a pixel composed purely of the first endmember will be colored pure red, whereas one with equal fractions of the first and second endmembers will be colored with equal proportions of red and blue ink and so will appear purple. Figs. 12, 14, and 15a are produced in this way. The gray-scale manner is straightforward. Taking the subfigures in the last column of Fig. 13b for example, we illustrate the fractional abundances associated with each pixel by plotting it in the four subfigures with gray-scale intensities given by the corresponding abundance fractions. So, for instance, a pixel composed purely of the first endmember will be white in the first subfigure and black in the other three.
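The pseudo-color scheme described above can be sketched as follows, assuming four abundance maps mixed as a convex combination of red, blue, green and black; the helper name and array layout are our own illustration, not code from the paper.

```python
import numpy as np

def pseudo_color(abundances):
    """Map four abundance maps of shape (4, H, W) to an RGB image (H, W, 3)
    by mixing red, blue, green and black in abundance proportions."""
    inks = np.array([[1.0, 0.0, 0.0],   # red
                     [0.0, 0.0, 1.0],   # blue
                     [0.0, 1.0, 0.0],   # green
                     [0.0, 0.0, 0.0]])  # black
    # Convex combination of the ink colors, weighted by fractional abundances.
    return np.tensordot(abundances, inks, axes=([0], [0]))

a = np.zeros((4, 1, 2))
a[0, 0, 0] = 1.0                       # pixel 0: pure endmember 1 -> red
a[0, 0, 1] = a[1, 0, 1] = 0.5          # pixel 1: half red, half blue -> purple
img = pseudo_color(a)
```

Since the abundances at each pixel sum to at most one, the mixture stays inside the RGB cube, and a sparsely mixed pixel stays close to one of the pure ink colors, which is why sparse abundances read as crisp, saturated maps.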
Comparing the results in Figs. 12, 13, 14 and 15, we have

SSNMF produces the abundance maps most similar to the ground truths. In addition, the colors in our results are regionally smooth and sparsely mixed from red, blue, green and black ink. This demonstrates that the structured sparse constraint is effective and meaningful.

Abundance maps estimated by the sparsity-regularized NMF methods contain more noise than those of SSNMF and GNMF. The reason is that the graph-structure constraint is a kind of smoothness constraint, which encourages the learnt abundance maps to be smooth.
(a) Urban data. (b) Jasper Ridge data. Figure 8: Performance vs. the varying parameters: (a) on the Urban data, (b) on the Jasper Ridge data. The first column shows the SADs, while the second column displays the RMSEs. The symbol at the bottom-right of each subfigure indicates the multiple of the best parameter setting used.
IV-F The Performance versus Varying Parameters
In this subsection, we evaluate the impact of the varying regularization parameters on performance. Nine experiments are conducted with nine different regularization parameter settings on the hyperspectral images, which are not degraded by manually added Gaussian noise. The regularization parameters are varied as nine multiples of the best parameter values, where the best values are the ones that help SSNMF achieve its best results on each data set. Obviously, these values differ between the two data sets. To reduce the randomness of the results, each experiment is repeated ten times and the mean results are reported.
Fig. 8 shows the performance graphically: (a) on the Urban data, (b) on the Jasper Ridge data. For simplicity, average performances are provided for each hyperspectral image: SADs on the left side and RMSEs on the right side. Taking Fig. 8a for example, each value in the left subfigure is the average of the SADs of all estimated endmembers; each value in the right subfigure is the average of the RMSEs of all estimated abundance maps. There is no regularization parameter in NMF, so its curves in Fig. 8 are flat. From the results in Fig. 8, we observe that

As the regularization parameters vary over this range, the performance of SSNMF first improves and then degrades. The proposed SSNMF performs well over a fairly wide interval of parameter values. In particular, SSNMF greatly outperforms the other algorithms under a suitable parameter setting.

It seems that the sparse NMF variants and GNMF are more robust to the varying regularization parameters than our SSNMF method. There might be two reasons for this. First, the results of these methods are much worse than those of our method; that is, varying the regularization parameters cannot make these algorithms perform much better or much worse. Second, although our SSNMF method can achieve significantly better results than the other methods, it performs poorly under extreme parameter settings.
IV-G Influences of Weighting Methods
There are many weighting methods. Three common ones consider the feature distance only, the spatial distance only, or the spatial and feature distances simultaneously. The second and third methods are well suited to hyperspectral images. The reason is that hyperspectral images enjoy a geometrically intuitive property: the pixels are located on the grid of a 2D plane, so a spatial neighborhood is inherent to each pixel. In the previous experiments, we employ the third weighting method, which treats the pixels among the 30% biggest SADs in the local windows as nearest neighbors. In this subsection, the HU performances of the second and third weighting methods are thoroughly compared.
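The third weighting method can be sketched roughly as below. This is only an illustrative reconstruction, not the paper's code: it connects each pixel to the most spectrally similar candidates (smallest spectral angles) inside a local spatial window, keeping a fixed fraction of them; the paper's exact selection rule, window size, and edge weights may differ.

```python
import numpy as np

def local_sad_neighbors(img, win=3, keep=0.3):
    """For each pixel of img (H, W, B), keep the `keep` fraction of the most
    spectrally similar pixels (smallest SAD) inside a (2*win+1) x (2*win+1)
    spatial window. Returns {(i, j): [(i2, j2), ...]}. Illustrative sketch."""
    H, W, _ = img.shape
    unit = img / (np.linalg.norm(img, axis=2, keepdims=True) + 1e-12)
    graph = {}
    for i in range(H):
        for j in range(W):
            cand, angles = [], []
            for di in range(-win, win + 1):
                for dj in range(-win, win + 1):
                    ii, jj = i + di, j + dj
                    if (di, dj) != (0, 0) and 0 <= ii < H and 0 <= jj < W:
                        cos = float(np.dot(unit[i, j], unit[ii, jj]))
                        cand.append((ii, jj))
                        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
            k = max(1, int(keep * len(cand)))
            order = np.argsort(angles)[:k]   # smallest spectral angles first
            graph[(i, j)] = [cand[t] for t in order]
    return graph
```

Restricting candidates to a spatial window keeps the graph sparse and cheap to build, while the spectral-angle test prevents spatially adjacent but spectrally dissimilar pixels from being forced to share abundances.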
Figs. 9 and 10 show the abundance maps obtained by the SSNMF algorithm with different weighting methods: (a) considering the spatial distance only and (b) considering the spatial and feature distances simultaneously. This experiment is conducted on the hyperspectral images that are not degraded by manually added Gaussian noise. As Figs. 9a and 10a show, the results of weighting method (a) are too smooth and hazy. The reason is that considering only the spatial distance easily connects many pixels with dissimilar spectral signatures on the graph. Minimizing the objective function then transfers these graph constraints to the abundance space. Besides, these improper constraints can mislead the optimization algorithm when solving the nonconvex problem, resulting in bad minima.
IV-H Convergence Study
The objective function of SSNMF has been proved non-increasing under the updating rules (19) and (20). In this subsection, we study the convergence rate and convergence time of the proposed algorithm. Fig. 11 illustrates the convergence curves of NMF and SSNMF on the two hyperspectral data sets: (a) the Urban data set and (b) the Jasper Ridge data set. In each subfigure, the horizontal axis shows the number of iterations and the vertical axis displays the energy of the objective function. As can be seen, the updating rules of both algorithms are efficient, usually converging within 50 iterations; SSNMF converges a bit faster, within 30 iterations.
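For reference, the general shape of such convergence curves can be reproduced with the standard Lee-Seung multiplicative rules [29]. Note that this sketch implements plain NMF only, not the SSNMF rules (19) and (20), which add the structured sparse terms; all names are our own.

```python
import numpy as np

def nmf_multiplicative(X, K, iters=50, seed=0):
    """Standard Lee-Seung multiplicative updates minimizing ||X - W H||_F^2,
    recording the objective value after each iteration."""
    rng = np.random.default_rng(seed)
    B, N = X.shape
    W = rng.random((B, K)) + 1e-3        # endmember matrix, B bands x K sources
    H = rng.random((K, N)) + 1e-3        # abundance matrix, K sources x N pixels
    curve = []
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
        curve.append(float(np.linalg.norm(X - W @ H) ** 2))
    return W, H, curve

X = np.random.default_rng(1).random((20, 40))   # toy "hyperspectral" matrix
_, _, curve = nmf_multiplicative(X, K=4)
# curve is non-increasing and flattens out well within 50 iterations
```

Because the updates are purely multiplicative, nonnegativity of W and H is preserved automatically, which is the property that makes this family of rules attractive for HU.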
The convergence time, measured in seconds, of NMF and SSNMF is summarized in Table VII, which has three rows. The first row shows the time for constructing the weighted graphs; since NMF does not need this step, its graph-construction time is denoted by “–” in the table. The second row shows the iteration time, and the total convergence time is summarized in the last row. Comparing the results in Table VII, we find that

The SSNMF method costs less time than the NMF method during the iteration process. The reason is that SSNMF needs fewer iterations than NMF before convergence, as shown in Fig. 11.

Although NMF costs less total convergence time than SSNMF on the Urban data, it loses this advantage on the Jasper Ridge data. The reason might be that the Jasper Ridge data are much smaller than the Urban data, which makes the ratio of graph-construction time to iteration time smaller on the Jasper Ridge data than on the Urban data.
(a) Urban. (b) Jasper Ridge. Figure 11: Convergence curves of NMF and SSNMF.
Table VII: The time (in seconds) of constructing the graph, of iterating, and of total convergence for NMF and SSNMF on the two data sets.
V Conclusion
Based on the observed properties of hyperspectral data, we propose an effective method for the HU problem by imposing a Structured Sparse constraint within the NMF framework, called SSNMF for short. SSNMF can effectively overcome the three limitations of NMF for the HU problem. With the structured sparse constraint, SSNMF not only transfers the manifold structures inherently embedded in the original data space to the learned abundance space, but also learns an expressive and accurate set of endmembers. Experiments on several hyperspectral data sets show that our method achieves better results than the state-of-the-art methods in terms of both quantitative and qualitative performance. Besides, our method is relatively robust across different noise levels.
References
 [1] N. Keshava, “A survey of spectral unmixing algorithms,” Lincoln Laboratory Journal, vol. 14, no. 1, pp. 55–78, Jan 2003.
 [2] L. Tits, W. D. Keersmaecker, B. Somers, G. P. Asner, J. Farifteh, and P. Coppin, “Hyperspectral shape-based unmixing to improve intra- and interclass variability for forest and agro-ecosystem monitoring,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 74, pp. 163–174, 2012.
 [3] N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.
 [4] J. M. Bioucas-Dias et al., “Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, Apr. 2012.
 [5] J. M. P. Nascimento and J. M. B. Dias, “Vertex component analysis: a fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 43, no. 4, pp. 898–910, 2005.
 [6] C.-I. Chang, C.-C. Wu, W. Liu, and Y. C. Ouyang, “A new growing method for simplex-based endmember extraction algorithm,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 44, no. 10, pp. 2804–2819, 2006.
 [7] G. Martin and A. Plaza, “Spatial-spectral preprocessing prior to endmember identification and unmixing of remotely sensed hyperspectral data,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 380–395, 2012.
 [8] N. Dobigeon, S. Moussaoui, M. Coulon, J.-Y. Tourneret, and A. O. Hero, “Joint Bayesian endmember extraction and linear unmixing for hyperspectral imagery,” IEEE Transactions on Signal Processing (TSP), vol. 57, no. 11, pp. 4355–4368, 2009.

 [9] J. Wang and C.-I. Chang, “Applications of independent component analysis in endmember extraction and abundance quantification for hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 44, no. 9, pp. 2601–2616, 2006.
 [10] J. M. Bioucas-Dias, “A variable splitting augmented Lagrangian approach to linear spectral unmixing,” in Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2009, pp. 1–4.
 [11] J. M. P. Nascimento and J. M. Bioucas-Dias, “Hyperspectral unmixing based on mixtures of Dirichlet components,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 50, no. 3, pp. 863–878, 2012.
 [12] M. E. Winter, “N-FINDR: an algorithm for fast autonomous spectral endmember determination in hyperspectral data,” in SPIE Conference on Imaging Spectrometry, 1999, pp. 266–275.
 [13] D. D. Lee and H. S. Seung, “Learning the parts of objects with nonnegative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, Oct 1999.
 [14] S. E. Palmer, “Hierarchical structure in perceptual representation,” Elsevier Journal of Cognitive Psychology, vol. 9, no. 4, pp. 441 – 474, 1977.
 [15] N. K. Logothetis and D. L. Sheinberg, “Visual object recognition,” Annual Review of Neuroscience, vol. 19, no. 1, pp. 577–621, 1996.

 [16] S. Z. Li, X. Hou, H. Zhang, and Q. Cheng, “Learning spatially localized, parts-based representation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001, pp. 207–212.
 [17] R. Sandler et al., “Nonnegative matrix factorization with earth mover’s distance metric for image analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 33, no. 8, pp. 1590–1602, 2011.
 [18] W. Xu, X. Liu, and Y. Gong, “Document clustering based on nonnegative matrix factorization,” in International Conference on Research and Development in Information Retrieval (SIGIR), 2003, pp. 267–273.
 [19] F. Shahnaz, M. W. Berry, V. P. Pauca, and R. J. Plemmons, “Document clustering using nonnegative matrix factorization,” Elsevier Journal Information Processing & Management, vol. 42, no. 2, pp. 373–386, 2006.
 [20] A. D. Pascual-Montano et al., “Nonsmooth nonnegative matrix factorization (nsNMF),” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 28, no. 3, pp. 403–415, Mar. 2006.
 [21] L. Miao and H. Qi, “Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 45, no. 3, pp. 765–777, 2007.
 [22] A. Huck and M. Guillaume, “Robust hyperspectral data unmixing with spatial and spectral regularized NMF,” in Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2010, pp. 1–4.
 [23] X. Lu, H. Wu, Y. Yuan, P. Yan, and X. Li, “Manifold regularized sparse nmf for hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 51, no. 5, pp. 2815–2826, 2013.
 [24] M.D. Iordache, J. M. BioucasDias, and A. Plaza, “Sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 49, no. 6, pp. 2014–2039, 2011.
 [25] Y. Qian, S. Jia, J. Zhou, and A. Robles-Kelly, “Hyperspectral unmixing via L1/2 sparsity-constrained nonnegative matrix factorization,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 49, no. 11, pp. 4282–4297, Nov. 2011.
 [26] X. Liu, W. Xia, B. Wang, and L. Zhang, “An approach based on constrained nonnegative matrix factorization to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 49, no. 2, pp. 757–772, 2011.
 [27] X. Chen, S. Kim, Q. Lin, J. G. Carbonell, and E. P. Xing, “Graph-structured multi-task regression and an efficient optimization method for general fused lasso,” Computing Research Repository (CoRR), vol. abs/1005, no. 3, pp. 1–21, 2010.
 [28] X. Geng, Z. Xiao, L. Ji, Y. Zhao, and F. Wang, “A Gaussian elimination based fast endmember extraction algorithm for hyperspectral imagery,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 79, pp. 211–218, 2013.
 [29] D. D. Lee and H. S. Seung, “Algorithms for nonnegative matrix factorization,” in Advances in Neural Information Processing Systems (NIPS). MIT Press, 2000, pp. 556–562.
 [30] H. Kim and H. Park, “Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method,” SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 2, pp. 713–730, 2008.
 [31] M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, and R. J. Plemmons, “Algorithms and applications for approximate nonnegative matrix factorization,” Computational Statistics & Data Analysis, vol. 52, no. 1, pp. 155–173, 2007.
 [32] C.J. Lin, “Projected gradient methods for nonnegative matrix factorization,” MIT Press Neural Computation, vol. 19, no. 10, pp. 2756–2779, 2007.
 [33] A. Gersho and R. M. Gray, Vector quantization and signal compression. Norwell, MA, USA: Kluwer Academic Publishers, 1991.
 [34] I. T. Jolliffe, Principal Component Analysis, 2nd ed. New York, USA: Springer, 2002.
 [35] M. Belkin and P. Niyogi, “Laplacian eigenmaps and spectral techniques for embedding and clustering,” in Advances in Neural Information Processing Systems (NIPS). MIT Press, 2001, pp. 585–591.
 [36] F. R. K. Chung, Spectral Graph Theory. American Mathematical Society, Dec. 1996.
 [37] S. Xiang, F. Nie, and C. Zhang, “Semisupervised classification via local spline regression,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 32, no. 11, pp. 2039–2053, 2010.
 [38] S. Jia and Y. Qian, “Constrained nonnegative matrix factorization for hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 47, no. 1, pp. 161–173, 2009.
 [39] D. Cai, X. He, J. Han, and T. S. Huang, “Graph regularized nonnegative matrix factorization for data representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 33, no. 8, pp. 1548–1560, Aug. 2011.
 [40] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society, vol. 58, no. 1, pp. 267–288, 1996.
 [41] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
 [42] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society, vol. 39, no. 1, pp. 1–38, 1977.

 [43] L. Saul and F. Pereira, “Aggregate and mixed-order Markov models for statistical language processing,” in Conference on Empirical Methods in Natural Language Processing, 1997, pp. 81–89.
 [44] J. Kivinen and M. K. Warmuth, “Additive versus exponentiated gradient updates for linear prediction,” Information and Computation, vol. 132, no. 1, pp. 1–64, 1997.
 [45] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms (Second Edition). MIT Press, 2001.
 [46] D. C. Heinz and C.-I. Chang, “Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 39, no. 3, pp. 529–545, 2001.

 [47] P. O. Hoyer, “Non-negative sparse coding,” in IEEE Workshop on Neural Networks for Signal Processing, 2002, pp. 557–565.
 [48] J. Liu, J. Zhang, Y. Gao, C. Zhang, and Z. Li, “Enhancing spectral unmixing by local neighborhood weights,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 5, pp. 1545–1552, 2012.
 [49] N. Wang, B. Du, and L. Zhang, “An endmember dissimilarity constrained nonnegative matrix factorization method for hyperspectral unmixing,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

 [50] P. O. Hoyer, “Non-negative matrix factorization with sparseness constraints,” Journal of Machine Learning Research (JMLR), vol. 5, pp. 1457–1469, 2004.
 [51] ENVI Tutorials, “ENVI Classic vegetation hyperspectral analysis,” http://www.exelisvis.com/Learn/WhitepapersDetail/TabId/802/PID/2627/TagID/129/TagName/hyperspectralimagery/Default.aspx, 2013.
 [52] S. Jia and Y. Qian, “Spectral and spatial complexitybased hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 45, no. 12, pp. 3867–3879, 2007.
 [53] C.-I. Chang and Q. Du, “Estimation of number of spectrally distinct signal sources in hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing (TGRS), vol. 42, no. 3, pp. 608–619, 2004.
 [54] C. M. Bishop, Pattern Recognition and Machine Learning, ser. Information Science and Statistics. Springer, 2006.