Nonlinear spectral unmixing of hyperspectral images using Gaussian processes

07/23/2012
by Yoann Altmann, et al.
Heriot-Watt University, ENSEEIHT

This paper presents an unsupervised algorithm for nonlinear unmixing of hyperspectral images. The proposed model assumes that the pixel reflectances result from a nonlinear function of the abundance vectors associated with the pure spectral components. We assume that the spectral signatures of the pure components and the nonlinear function are unknown. The first step of the proposed method consists of the Bayesian estimation of the abundance vectors for all the image pixels and the nonlinear function relating the abundance vectors to the observations. The endmembers are subsequently estimated using Gaussian process regression. The performance of the unmixing strategy is evaluated with simulations conducted on synthetic and real data.


I Introduction

Spectral unmixing (SU) is a major issue when analyzing hyperspectral images. It consists of identifying the macroscopic materials present in a hyperspectral image and quantifying the proportions of these materials in the image pixels. Many SU strategies assume that pixel reflectances are linear combinations of pure component spectra [Bioucas2012]

. The resulting linear mixing model (LMM) has been widely adopted in the literature and has provided some interesting results. However, as discussed in

[Bioucas2012], the LMM can be inappropriate for some hyperspectral images, such as those containing sand, trees or vegetation areas. Nonlinear mixing models provide an interesting alternative to overcome the inherent limitations of the LMM. Nonlinear mixing models recently proposed in the literature include the bidirectional reflectance-based model of [Hapke1981] for hyperspectral images including intimate mixtures, i.e., when the photons interact with all the materials simultaneously. Such mixtures may occur for instance in sand or mineral areas. Another class of nonlinear models, referred to as bilinear models, has been studied in [Somers2009, Nascimento2009, Fan2009, Halimi2010]

for modeling scattering effects (mainly observed in vegetation areas). Other, more flexible unmixing techniques have also been proposed to handle wider classes of nonlinearity, including radial basis function networks

[Guilfoyle2001, Altmann2011IGARSS], post-nonlinear mixing models [Altmann2012] and kernel-based models [Broadwater2007, Liu2009, Chen2011, Chen2012].

Most existing unmixing strategies can be decomposed into two steps, referred to as endmember extraction and abundance estimation. Endmember identification is usually achieved before estimating the abundances for all the image pixels. In the last decade, many endmember extraction algorithms (EEAs) have been developed to identify the pure spectral components contained in a hyperspectral image (see [Bioucas2012] for a recent review of these methods). Most EEAs rely on the LMM, which, as discussed, is inappropriate for nonlinear mixtures of the endmembers. More recently, an EEA was proposed in [Heylen2011] to extract endmembers from a set of nonlinearly mixed pixels. This paper proposes to estimate the abundance vectors first and then to estimate the endmembers in a second step, using the prediction capacity of Gaussian processes (GPs). This approach breaks from the usual paradigm of spectral unmixing. More precisely, this paper considers a kernel-based approach for nonlinear SU based on a nonlinear dimensionality reduction using a Gaussian process latent variable model (GPLVM). The main advantage of GPLVMs is their capacity to accurately model many different nonlinearities. In this paper, we propose to use a particular form of kernel based on existing bilinear models, which allows the proposed unmixing strategy to be accurate when the underlying mixing model is bilinear. Note that the LMM is a particular bilinear model. The algorithm proposed herein is “unsupervised” in the sense that the endmembers contained in the image and the mixing model are not known. Only the number of endmembers is assumed to be known. As a consequence, the parameters to be estimated are the kernel parameters, the endmember spectra and the abundances for all image pixels.

The paper is organized as follows. Section II presents the nonlinear mixing model considered in this paper for hyperspectral image unmixing. Section III introduces the GPLVM used for latent variable estimation. The constrained GPLVM for abundance estimation is detailed in Section IV. Section V studies the endmember estimation procedure using GP regression. Some simulation results conducted on synthetic data are shown and discussed in Section VI. Finally, conclusions are drawn in Section VII.

II Nonlinear mixing model

Consider a hyperspectral image of pixels, composed of endmembers and observed in spectral bands. For convenience, the data are assumed to have been previously centered. The -spectrum of the th mixed pixel () is defined as a transformation of its corresponding abundance vector as follows

(1)

where is a linear or nonlinear unknown function. The noise vector

is an independent, identically distributed (i.i.d.) white Gaussian noise sequence with variance

, i.e., . Without loss of generality, the nonlinear mapping (1) from the abundance space to the observation space can be rewritten

(2)

where , is an matrix and the dimension is the dimension of the subspace spanned by the transformed abundance vectors . Of course, the performance of the unmixing strategy relies on the choice of the nonlinear function . In this paper, we will use the following nonlinearity

(3)

with . The primary motivation for considering this particular kind of nonlinearity is the fact that the resulting mixing model is a bilinear model with respect to each abundance . More precisely, this mixing model extends the generalized bilinear model proposed in [Halimi2010] and thus the LMM. It is important to note from (2) and (3) that contains the spectra of the pure components present in the image and interaction spectra between these components. Note also that the analysis presented in this paper could be applied to any other nonlinearity .
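To make the structure of the bilinear nonlinearity (3) concrete, the following numpy sketch builds the feature vector containing the abundances and their pairwise products; the function name and the ordering of the cross terms are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def bilinear_features(a):
    """Map an abundance vector a (length R) to the bilinear feature vector
    [a_1, ..., a_R, a_1*a_2, ..., a_{R-1}*a_R], in the spirit of (3).
    Hypothetical helper: the paper only specifies the general form."""
    a = np.asarray(a, dtype=float)
    R = a.size
    # Pairwise interaction terms a_i * a_j for i < j.
    cross = [a[i] * a[j] for i in range(R) for j in range(i + 1, R)]
    return np.concatenate([a, np.array(cross)])

# For R = 3 endmembers the feature vector has R + R(R-1)/2 = 6 entries.
phi = bilinear_features([0.5, 0.3, 0.2])
```

With such a map, the mixed spectrum is linear in the features, which is why the resulting model is bilinear with respect to each abundance.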

Due to physical constraints, the abundance vector satisfies the following positivity and sum-to-one constraints

(4)

Since the nonlinearity is fixed, the problem of unsupervised spectral unmixing is to determine the spectrum matrix , the abundance matrix satisfying (2) under the constraints (4) and the noise variance . Unfortunately, it can be shown that the solution of this constrained problem is not unique. In the noise-free linear case, it is well known that the data are contained in a simplex whose vertices are the endmembers. When estimating the endmembers in the linear case, a simplex of minimum volume embedding the data is expected. Equivalently, the estimated abundance vectors are expected to occupy the largest volume in the simplex defined by (4). In a similar fashion to the linear case, the estimated abundance matrix resulting from an unsupervised nonlinear SU strategy may not occupy the largest volume in the simplex defined by (4). To tackle this problem, we first propose to relax the positivity constraints for the elements of the matrix and to consider only the sum-to-one constraint. For ease of understanding, we introduce vectors satisfying the sum-to-one constraint

(5)

referred to as latent variables and denoted as . The positivity constraint will be handled subsequently by a scaling procedure discussed in Section IV. The next section presents the Bayesian model for latent variable estimation using GPLVMs.
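As an illustration of the positivity and sum-to-one constraints, vectors satisfying (4) can be drawn uniformly on the simplex with a flat Dirichlet distribution (a standard construction; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
R, N = 3, 1000  # number of endmembers and of pixels (illustrative values)

# A flat Dirichlet draw satisfies both constraints in (4):
# nonnegative entries that sum to one (uniform on the simplex).
A = rng.dirichlet(np.ones(R), size=N)
```

Dropping the positivity requirement, as done for the latent variables in (5), enlarges this simplex to the whole affine hyperplane of vectors summing to one.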

III Bayesian model

GPLVMs [Lawrence2003] are powerful tools for probabilistic nonlinear dimensionality reduction that rewrite the nonlinear model (1) as a nonlinear mapping from a latent space to the observation space as follows

(6)

where is defined in (3), is an matrix with , and . Note that from (2) and (6) the columns of span the same subspace as the columns of . Consequently, the columns of are linear combinations of the spectra of interest, i.e., the columns of . Note also that when is full rank, it can be shown that the latent variables are necessarily linear combinations of the abundance vectors of interest. Figs. 1 and 2 illustrate the mapping from the abundance vectors to the observations that will be used in this paper. Note that the linear mapping between the abundances and the latent variables will be explained in detail in Section IV. For brevity, the vectors will be denoted as in the sequel. Assuming independence between the observations, the statistical properties of the noise lead to the following likelihood of the observation matrix

(7)

where is the

latent variable matrix. Note that the likelihood can be rewritten as a product of Gaussian distributions over the spectral bands as follows

(8)

where and is an matrix. The idea of GPLVMs is to consider as a nuisance parameter, to assign a Gaussian prior to and to marginalize the joint likelihood (7) over , i.e.,

(9)

where is the prior distribution of . The estimation of and can then be achieved by maximizing (9) following the maximum likelihood estimator (MLE) principle. An alternative consists of using an appropriate prior distribution , assuming prior independence between and , and maximizing the joint posterior distribution

(10)

with respect to (w.r.t.) , yielding the maximum a posteriori (MAP) estimator of . The next paragraph discusses different possibilities for marginalizing the joint likelihood (8) w.r.t. .

Fig. 1: Nonlinear mapping from the abundances vectors to the observed mixed pixels.
Fig. 2: Example of mapping decomposition from the abundance vectors to the observed nonlinearly mixed pixels through the latent variables ().

III-A Marginalizing

It can be seen from (9) that the marginalized likelihood and thus the associated latent variables depend on the choice of the prior . More precisely, assigning a given prior for favors particular representations of the data, i.e., particular solutions for the latent variable matrix maximizing the posterior (10). When using GPLVMs for dimensionality reduction, a classical choice [Lawrence2003] consists of assigning independent Gaussian priors for , leading to

(11)

However, this choice can be inappropriate for SU. First, Eq. (11) can be incompatible with the admissible latent space, constrained by (5). Second, the prior (11) assumes the columns of (linear combinations of the spectra of interest) are a priori Gaussian, which is not relevant for real spectra in most applications. A more sophisticated choice consists of considering a priori correlation between the columns (inter-spectra correlation) and rows (inter-band correlation) of using a structured covariance matrix to be fixed or estimated. In particular, introducing correlation between close spectral bands is of particular interest in hyperspectral imagery. Structured covariance matrices have already been considered in the GP literature for vector-valued kernels [Bonilla07] (see [AlvarezRL11arxiv] for a recent review). However, computing the resulting marginalized likelihood usually requires the estimation of the structured covariance matrix and the inversion of an covariance matrix (see the technical report [AltmannTR2012] for further details), which is prohibitive for SU of hyperspectral images since several hundreds of spectral bands are usually considered when analyzing real data. Sparse approximation techniques might be used to reduce this computational complexity (see [Quinonero-candela05] for a recent review). However, to our knowledge, these techniques rely on the inversion of matrices bigger than matrices. The next section presents an alternative that only requires the inversion of an covariance matrix without any approximation.

III-B Subspace identification

It can be seen from (6) that in the noise-free case, the data belong to a -dimensional subspace that is spanned by the columns of . To reduce the computational complexity induced by the marginalization of the matrix while considering correlations between spectral bands, we propose to marginalize a basis of the subspace spanned by instead of itself. More precisely, can be decomposed as follows

(12)

where is an matrix ( is vector) whose columns are arbitrary basis vectors of the -dimensional subspace that contains the subspace spanned by the columns of and is a matrix that scales the columns of . Note that the subspaces spanned by and are the same when is full rank, resulting in a full rank matrix . The joint likelihood (8) can be rewritten as

(13)

where is an matrix. The proposed subspace estimation procedure consists of assigning an appropriate prior distribution to (denoted as ) and marginalizing from the joint posterior of interest. It is easier to choose an informative prior distribution that accounts for correlation between spectral bands than to choose an informative , since is an arbitrary basis of the subspace spanned by , which can be easily estimated (as will be shown in the next section).

III-C Parameter priors

GPLVMs construct a smooth mapping from the latent space to the observation space that preserves dissimilarities [Lawrence2006]. In the SU context, this means that pixels that are spectrally different have different latent variables and thus different abundance vectors. However, preserving local distances is also interesting: spectrally close pixels are expected to have similar abundance vectors and thus similar latent variables. Several approaches have been proposed to preserve similarities, including back-constraints [Lawrence2006], dynamical models [Wang2006] and locally linear embedding (LLE) [Urtasun2007]. In this paper, we use LLE to assign an appropriate prior to . First, the nearest neighbors of each observation vector are computed using the Euclidean distance ( is the set of integers such that is a neighbor of ). The weight matrix of size providing the best reconstruction of from its neighbors is then estimated as

(14)

Note that the solution of (14) is easy to obtain in closed form since the criterion to optimize is a quadratic function of . Note also that the matrix is sparse since each pixel is only described by its nearest neighbors. The locally linear patches obtained by the LLE can then be used to set the following prior for the latent variable matrix

(15)

where

is a fixed hyperparameter to be adjusted and

is the indicator function over the set defined by the constraints (5).
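A minimal sketch of the LLE weight computation behind (14), following the standard Roweis–Saul closed-form solution (a local Gram system solved per pixel, with weights normalized to sum to one); the regularization constant and helper name are assumptions, not the paper's exact implementation:

```python
import numpy as np

def lle_weights(Y, K):
    """Reconstruction weights for LLE: each row of Y (one pixel spectrum)
    is approximated by an affine combination (weights summing to one) of
    its K nearest neighbours, as in (14)."""
    N = Y.shape[0]
    W = np.zeros((N, N))
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise distances
    for n in range(N):
        idx = np.argsort(d2[n])[1:K + 1]        # K nearest neighbours of y_n
        Z = Y[idx] - Y[n]                       # neighbours centred on y_n
        G = Z @ Z.T                             # local Gram matrix (K x K)
        G += 1e-8 * np.trace(G) * np.eye(K)     # regularise ill-conditioned G
        w = np.linalg.solve(G, np.ones(K))      # closed-form quadratic solve
        W[n, idx] = w / w.sum()                 # enforce the sum-to-one constraint
    return W
```

As noted in the text, the resulting matrix is sparse: each row has at most K nonzero entries.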

In this paper, we propose to assign a prior to

using the standard principal component analysis (PCA) (note again that the data have been centered). Assuming prior independence between

, the following prior is considered for the matrix

(16)

where is an projection matrix containing the first eigenvectors of the sample covariance matrix of the observations (provided by PCA) and is a dispersion parameter that controls the dispersion of the prior. Note that the correlation between spectral bands is implicitly introduced through .
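The projection matrix entering the prior (16) can be obtained by a standard eigendecomposition of the sample covariance of the (centered) observations, for instance:

```python
import numpy as np

def pca_basis(Y, D):
    """First D principal directions of the centred observations Y (N x L):
    eigenvectors of the sample covariance with the largest eigenvalues.
    This is the standard PCA construction referred to in the text."""
    C = Y.T @ Y / Y.shape[0]            # sample covariance matrix (L x L)
    eigval, eigvec = np.linalg.eigh(C)  # eigh returns ascending eigenvalues
    return eigvec[:, ::-1][:, :D]       # top-D eigenvectors, one per column
```

The columns are orthonormal, so the prior dispersion parameter alone controls how far the estimated subspace may deviate from the PCA subspace.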

Non-informative priors are assigned to the noise variance and the matrix , i.e.,

(17)

where the intervals and cover the possible values of the parameters and . Similarly, the following non-informative prior is assigned to the hyperparameter

(18)

where the interval covers the possible values of the hyperparameter . The resulting directed acyclic graph (DAG) is depicted in Fig. 3.

Fig. 3:

DAG for the parameter priors and hyperpriors (the fixed parameters appear in dashed boxes).

III-D Marginalized posterior distribution

Assuming prior independence between , , , and , the marginalized posterior distribution of can be expressed as

(19)

where . Straightforward computations lead to

(20)

where , is an vector, is an matrix and denotes the matrix trace.

Due to the nonlinearity introduced by the mapping, a closed-form expression for the parameters maximizing the joint posterior distribution (19) cannot be obtained. We propose to use a scaled conjugate gradient (SCG) method to maximize the marginalized log-posterior. To ensure the sum-to-one constraint for , the following arbitrary reparametrization

is used and the marginalized posterior distribution is optimized w.r.t. the first columns of denoted . The partial derivatives of the log-posterior w.r.t. and are obtained using partial derivatives w.r.t. and

and the classical chain rules (see technical report

[AltmannTR2012] for further details). The resulting latent variable estimation procedure is referred to as locally linear GPLVM (LL-GPLVM).
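One simple reparametrization that enforces the sum-to-one constraint (5) by construction, consistent with optimizing only the first columns of the latent matrix, completes each latent vector with one minus the sum of its free coordinates. This is a plausible choice, not necessarily the exact one used by the authors:

```python
import numpy as np

def free_to_latent(Xfree):
    """Complete the free parameters (first R-1 coordinates of each latent
    vector) with x_R = 1 - sum of the others, so every row satisfies the
    sum-to-one constraint (5) by construction.  Illustrative sketch only."""
    last = 1.0 - Xfree.sum(axis=1, keepdims=True)
    return np.hstack([Xfree, last])
```

Because the completion is linear, the chain rule mentioned in the text propagates gradients from the full latent matrix back to the free coordinates with no extra cost.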

Note that the marginalized likelihood reduces to the product of

independent Gaussian probability density functions since

(21)

and . Note also that the covariance matrix is related to the covariance matrix of the second-order polynomial kernel [Rasmussen2005, p. 89]. More precisely, the proposed nonlinear mapping corresponds to a particular polynomial kernel whose metric is induced by the matrix . Finally, note that the evaluation of the marginalized likelihood (20) only requires the inversion of the covariance matrix . It can be seen from the following Woodbury matrix identity [brookes05]

(22)

that the computation of mainly relies on the inversion of a matrix. Similarly, the computation of mainly consists of computing the determinant of a matrix, which reduces the computational cost when compared to the structured covariance matrix based approach presented in Section III-A.
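The computational saving promised by the Woodbury identity (22) is easy to check numerically: the large covariance matrix can be inverted using only the inverse of a small matrix whose size is the feature dimension (the dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, q, s2 = 50, 4, 0.1                    # many pixels, small feature dimension
Psi = rng.standard_normal((N, q))

# Direct inversion of the N x N covariance ...
K = s2 * np.eye(N) + Psi @ Psi.T
K_inv_direct = np.linalg.inv(K)

# ... versus the Woodbury identity (22), which only inverts a q x q matrix.
small = np.linalg.inv(s2 * np.eye(q) + Psi.T @ Psi)
K_inv_woodbury = (np.eye(N) - Psi @ small @ Psi.T) / s2
```

The same trick applies to the determinant via the matrix determinant lemma, which is the second saving mentioned in the text.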

III-E Estimation of

Let us denote as the maximum a posteriori (MAP) estimator of obtained by maximizing . Using the likelihood (13), the prior distribution (16) and Bayes’ rule, we obtain the posterior distribution of conditioned upon , i.e.,

(23)

where and . Since the conditional posterior distribution of is the product of independent Gaussian distributions, the MAP estimator of conditioned upon is given by

(24)

where , , and . The next section studies a scaling procedure that estimates the abundance matrix using the estimated latent variables resulting from the maximization of (19).
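The closed form (24) has the structure of a per-band Bayesian linear regression with a Gaussian prior centered on the PCA directions. A generic sketch of that structure, under the assumption of isotropic prior and noise covariances (the paper's exact parametrization may differ; all names are illustrative):

```python
import numpy as np

def map_subspace(Y, Psi, P_prior, s2, gamma2):
    """Closed-form MAP estimate of the subspace coefficients given latent
    features Psi (N x q): per-band Gaussian-prior linear regression.
    Y is N x L, P_prior (q x L) is the prior mean, s2 the noise variance,
    gamma2 the prior dispersion.  A sketch of the shape of (24)."""
    q = Psi.shape[1]
    A = Psi.T @ Psi / s2 + np.eye(q) / gamma2   # posterior precision (q x q)
    B = Psi.T @ Y / s2 + P_prior / gamma2       # data term + prior pull
    return np.linalg.solve(A, B)                # q x L, one column per band
```

With a vague prior (large dispersion) this reduces to ordinary least squares, and the posterior factorizes over bands as stated in the text.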

IV Scaling procedure

The optimization procedure presented in Section III-D provides a set of latent variables that represent the data but can differ from the abundance vectors of interest. Consider obtained after maximization of the posterior (19). The purpose of this section is to estimate an abundance matrix such that

(25)

where occupy the maximal volume in the simplex defined by (4), is an matrix and is an standard i.i.d. Gaussian noise matrix which models the scaling errors. Since satisfy the sum-to-one constraint (5), estimating the relation between and is equivalent to estimating the relation between and . However, when considering the mapping between and , non-isotropic noise has to be considered since the rows of and satisfy the sum-to-one constraint, i.e., they belong to the same -dimensional subspace.

Eq. (25) corresponds to a LMM whose noisy observations are the rows of . Since is assumed to occupy the largest volume in the simplex defined by (4), the columns of are the vertices of the simplex of minimum volume that contains . As a consequence, it seems reasonable to use a linear unmixing strategy for the set of vectors to estimate and . In this paper, we propose to jointly estimate and using the Bayesian algorithm presented in [Dobigeon2009] for unsupervised SU assuming the LMM. Note that the algorithm in [Dobigeon2009] assumed positivity constraints for the estimated endmembers. Since these constraints are unjustified for , the original algorithm has been slightly modified by removing the truncations in the projected endmember priors (see [Dobigeon2009] for details). Once the estimator of has been obtained by the proposed scaling procedure, the resulting constrained latent variables denoted as are defined as follows

(26)

with . Using the sum-to-one constraint , we obtain

where is an matrix. The final abundance estimation procedure, including the LL-GPLVM presented in Section III and the scaling procedure investigated in this section, is referred to as the fully constrained LL-GPLVM (FCLL-GPLVM) (a detailed algorithm is available in [AltmannTR2012]). Once the final abundance matrix and the matrix have been estimated, we propose an endmember extraction procedure based on GP regression. This method is discussed in the next section.

V Gaussian process regression

Endmember estimation is one of the main issues in SU. Most existing EEAs intend to estimate the endmembers from the data, i.e., by selecting the purest pixels in the observed image [Chaudhry2006, Winter1999, Nascimento2005]. However, these approaches can be inefficient when the image does not contain enough pure pixels. Some other EEAs based on the minimization of the volume containing the data (such as the minimum volume simplex analysis [Li2008igarss]) can mitigate the absence of pure pixels in the image. This section studies a new endmember estimation strategy based on GP regression for nonlinear mixtures. This strategy can be used even when the scene does not contain pure pixels. It assumes that all the image abundances have been estimated using the algorithm described in Section IV. Consider the set of pixels and corresponding estimated abundance vectors . GP regression first allows the nonlinear mapping in (1) (from the abundance space to the observation space) to be estimated. The estimated mapping is denoted as . Then, it is possible to use the prediction capacity of GPs to predict the spectrum corresponding to any new abundance vector . In particular, the predicted spectra associated with pure pixels, i.e., the endmembers, correspond to abundance vectors that are the vertices of the simplex defined by (4). This section provides more details about GP prediction for endmember estimation.

It can be seen from the marginalized likelihood (20) that is the product of independent GPs associated with each spectral band of the data space (21). Looking carefully at the covariance matrix of (i.e., to ), we can write

(28)

where is the white Gaussian noise vector associated with the th spectral band (having covariance matrix ) and (note that all known conditional parameters have been omitted for brevity)

(29)

with the covariance matrix of . The vector is referred to as the hidden vector associated with the observation . Consider now a test datum with hidden vector , abundance vector and . We assume that the test data share the same statistical properties as the training data in the sense that is a Gaussian vector such that

(30)

where is the variance of and contains the covariances between the training inputs and the test inputs, i.e.,

(31)

Straightforward computations lead to

(32)

with

Since the posterior distribution (32) is Gaussian, the MAP and MMSE estimators of equal the posterior mean .
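The posterior mean in (32) is the standard Gaussian-process conditional mean. The following minimal sketch uses a homogeneous second-order polynomial kernel as a stand-in for the paper's bilinear kernel (an assumption; the helper names are illustrative):

```python
import numpy as np

def poly2_kernel(X1, X2):
    """Homogeneous second-order polynomial kernel, in line with the
    bilinear mapping of Section III (a stand-in for the exact kernel)."""
    return (X1 @ X2.T) ** 2

def gp_predict(K, k_star, y, s2):
    """Posterior mean of a zero-mean GP at a test input: the standard
    conditional Gaussian formula behind (32)."""
    alpha = np.linalg.solve(K + s2 * np.eye(K.shape[0]), y)
    return k_star @ alpha
```

Evaluating this predictive mean at the vertices of the abundance simplex (pure pixels) is precisely how the endmember estimation below proceeds.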

In order to estimate the endmembers, we propose to replace the parameters and by their estimates and and to compute the estimated hidden vectors associated with the abundance vectors for . For each value of , the th estimated hidden vector will be the th estimated endmember (note that the estimated endmembers are centered since the data have previously been centered; the actual endmembers can be obtained by adding the empirical mean to the estimated endmembers). Indeed, for the LMM and the bilinear models considered in this paper, the endmembers are obtained by setting in the model (2) relating the observations to the abundances. Note that the proposed endmember estimation procedure provides the posterior distribution of each endmember via (32), which can be used to derive confidence intervals for the estimates. The next section presents some simulation results obtained for synthetic and real data.

VI Simulations

VI-A Synthetic data

The performance of the proposed GPLVM for dimensionality reduction is first evaluated on three synthetic images of pixels. The endmembers contained in these images have been extracted from the spectral libraries provided with the ENVI software [ENVImanual2003] (i.e., green grass, olive green paint and galvanized steel metal). The first image has been generated according to the linear mixing model (LMM). The second image is distributed according to the bilinear mixing model introduced in [Fan2009], referred to as the “Fan model” (FM). The third image has been generated according to the generalized bilinear model (GBM) studied in [Halimi2010] with the following nonlinearity parameters

The abundance vectors have been randomly generated according to a uniform distribution on the admissible set defined by the positivity and sum-to-one constraints (4). The noise variance has been fixed to , which corresponds to a signal-to-noise ratio of dB, the worst case for current spectrometers. The hyperparameter of the latent variable prior (15) has been fixed to and the number of neighbors for the LLE is for all the results presented in this paper. The quality of dimensionality reduction of the GPLVM can be measured by the average reconstruction error (ARE) defined as

(33)

where is the th observed pixel and its estimate. For the LL-GPLVM, the th estimated pixel is given by , where is estimated using (24). Table I compares the AREs obtained by the proposed LL-GPLVM and the projection onto the first principal vectors provided by principal component analysis (PCA). The proposed LL-GPLVM slightly outperforms PCA for nonlinear mixtures in terms of ARE. More precisely, the AREs of the LL-GPLVM mainly consist of the noise errors (), whereas model errors are added when applying PCA to nonlinear mixtures. Fig. 4 compares the latent variables obtained after maximization of (20) for the three images to with the projections of the data onto the principal vectors provided by PCA. Note that only dimensions are needed to represent the latent variables (because of the sum-to-one constraint). From this figure, it can be seen that the latent variables of the LL-GPLVM describe a noisy simplex for the three images. This is not the case when using PCA for the nonlinear images. Fig. 5 shows the manifolds estimated by the LL-GPLVM for the three images to . This figure shows that the proposed LL-GPLVM can model the manifolds associated with the image pixels with good accuracy.
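A direct implementation of the ARE (33), assuming the usual root-mean-square normalization over the N pixels and L spectral bands (the normalization convention is an assumption; the paper's symbols were lost in extraction):

```python
import numpy as np

def are(Y, Y_hat):
    """Average reconstruction error (33): root mean squared residual per
    spectral measurement, for N x L matrices of observed and
    reconstructed pixels."""
    N, L = Y.shape
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / (N * L))
```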

ARE ()
PCA 0.99 1.08 1.04 1.00 1.06 1.03
LL-GPLVM 0.99 0.99 1.00 1.00 1.00 0.99
SU 1.00 1.13 1.06 1.14 1.57 1.12
FCLL-GPLVM 0.99 0.99 1.00 1.00 1.00 0.99
TABLE I: AREs: synthetic images.
Fig. 4: Top: Representation of the pixels (dots) using the first two principal components provided by the standard PCA for the three synthetic images to . Bottom: Representation using the latent variables estimated by the LL-GPLVM for the three synthetic images to .
(a) (LMM) (b) (FM) (c) (GBM)
Fig. 5: Visualization of the pixels (black dots) of , and using the 3 axis provided by the PCA procedure. The colored surface is the manifold identified by the LL-GPLVM.

The quality of unmixing procedures can also be measured by comparing the estimated and actual abundances using the root normalized mean square error (RNMSE) defined by

(34)

where is the th actual abundance vector and its estimate. Table II compares the RNMSEs obtained with different unmixing strategies. The endmembers have been estimated by the VCA algorithm in all simulations. The algorithms used for abundance estimation are the FCLS algorithm proposed in [Heinz2001] for , the LS method proposed in [Fan2009] for and the gradient-based method proposed in [Halimi2010] for . These procedures are referred to as “SU” in the table. These strategies are compared with the proposed FCLL-GPLVM. As mentioned above, the Bayesian algorithm for joint estimation of and under positivity and sum-to-one constraints for (introduced in [Dobigeon2009]) is used in this paper for the scaling step. It can be seen that the proposed FCLL-GPLVM is general enough to accurately approximate the considered mixing models since it provides the best results in terms of abundance estimation.
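A direct implementation of the RNMSE (34), assuming the usual normalization over the N pixels and R endmembers (the normalization convention is an assumption):

```python
import numpy as np

def rnmse(A, A_hat):
    """Root normalised mean square error (34) between the N x R matrices
    of actual and estimated abundance vectors."""
    N, R = A.shape
    return np.sqrt(np.sum((A - A_hat) ** 2) / (N * R))
```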

RNMSE ()
SU 5.7 7.4 22.7 49.3 86.6 47.8
FCLL-GPLVM 3.9 4.2 5.4 4.8 7.2 7.5
TABLE II: RNMSEs: synthetic images.

The quality of reconstruction of the unmixing procedure is also evaluated by the ARE. For the FCLL-GPLVM, the th reconstructed pixel is given by . Table I also shows the AREs corresponding to the different unmixing strategies. The proposed FCLL-GPLVM outperforms the other strategies in terms of ARE for these images.

Finally, the performance of the FCLL-GPLVM for endmember estimation is evaluated by comparing the estimated endmembers with the actual spectra. The quality of endmember estimation is evaluated by the spectral angle mapper (SAM) defined as

(35)

where is the th actual endmember and its estimate. Table III compares the SAMs obtained for each endmember using the VCA algorithm, the nonlinear EEA presented in [Heylen2011] (referred to as “Heylen”) and the FCLL-GPLVM for the three images to . These results show that the FCLL-GPLVM provides accurate endmember estimates for both linear and nonlinear mixtures.
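The SAM (35) is the angle between an actual endmember and its estimate; a direct implementation:

```python
import numpy as np

def sam(m, m_hat):
    """Spectral angle mapper (35): angle (in radians) between the actual
    endmember spectrum m and its estimate m_hat."""
    cos = np.dot(m, m_hat) / (np.linalg.norm(m) * np.linalg.norm(m_hat))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding errors
```

Being scale-invariant, the SAM measures spectral shape agreement and is insensitive to any residual amplitude mismatch between the estimated and actual endmembers.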

VCA Heylen FCLL-GPLVM
0.43 1.94 0.52
0.22 0.66 0.86
0.22 0.78 0.15
1.62 0.75 0.33
2.08 1.69 0.53
1.15 0.42 0.34
1.91 1.80 0.44
1.36 0.86 0.58
0.88 1.38 0.30
TABLE III: SAMs (): synthetic images.

Finally, the performance of the proposed unmixing algorithm is tested in scenarios where pure pixels are not present in the observed scene. More precisely, the simulation parameters remain the same for the three images to except for the abundance vectors, which are drawn from a uniform distribution on the following set

(36)

The three resulting images are denoted as , and . Table I shows that the absence of pure pixels does not change the AREs significantly when compared with those obtained for the images to . Moreover, the FCLL-GPLVM is more robust to the absence of pure pixels than the different SU methods. The good performance of the FCLL-GPLVM is due in part to the scaling procedure. Table II shows that the performance of the FCLL-GPLVM in terms of RNMSE is not degraded significantly when there is no pure pixel in the image (note that the situation is different when the endmembers are estimated using VCA). Table IV shows the performance of the FCLL-GPLVM for endmember estimation when there are no pure pixels in the image. The results of the FCLL-GPLVM do not change significantly when compared with those obtained for images to , which is not the case for the two other EEAs. The accuracy of the endmember estimation is illustrated in Fig. 6, which compares the endmembers estimated by the FCLL-GPLVM (blue lines) to the actual endmembers (red dots) and the VCA estimates (black line) for the image .

Fig. 6: Actual endmembers (red dots) and endmembers estimated by the FCLL-GPLVM (blue lines) and VCA (black lines) for one of the images without pure pixels.

The next section presents simulation results obtained for real data.

                          VCA     Heylen   FCLL-GPLVM
Image 1, endmember 1      2.87    6.38     0.38
Image 1, endmember 2      2.15    11.11    1.30
Image 1, endmember 3      2.10    2.62     0.24
Image 2, endmember 1      5.22    7.53     0.67
Image 2, endmember 2      8.02    9.59     1.46
Image 2, endmember 3      7.10    2.48     0.53
Image 3, endmember 1      6.89    6.59     0.61
Image 3, endmember 2      6.03    5.95     1.75
Image 3, endmember 3      3.73    2.36     0.48
TABLE IV: SAMs for the three endmembers of each synthetic image without pure pixels.

VI-B Real data

Fig. 7: Real hyperspectral data: Madonna data acquired by the Hyspex hyperspectral scanner over Villelongue, France (left) and the region of interest shown in true colors (right).

The real image considered in this section was acquired in 2010 by the Hyspex hyperspectral scanner over Villelongue, France. Spectral bands were recorded from the visible to the near infrared at high spatial resolution. This dataset has already been studied in [Sheeren2011] and is mainly composed of different vegetation species. The sub-image chosen here to evaluate the proposed unmixing procedure is depicted in Fig. 7. This image is mainly composed of three components, since the data belong to a two-dimensional manifold (see the black dots of Fig. 8 (a)). Consequently, we assume that the scene is composed of three endmembers. Using the ground truth of [Sheeren2011], we can determine some of the tree species present in the scene of interest. More precisely, Fig. 8 (a) shows the ground-truth clusters corresponding to oak trees (red dots) and chestnut trees (blue dots), projected onto a 3-dimensional subspace (defined by the first three principal components of a PCA applied to the image of Fig. 7). These two clusters are close to vertices of the data cloud. Consequently, oak and chestnut trees are identified as endmembers present in the image. The remaining endmember is not a tree species according to the ground truth provided in [Sheeren2011]; in the sequel, it will be referred to as the third endmember.
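The projection used for Fig. 8 (a), in which pixels and ground-truth clusters are viewed in the span of the first three principal components, can be sketched as follows. The toy data here merely stands in for the Madonna pixels:

```python
import numpy as np

def pca_project(Y, n_components=3):
    """Project pixel spectra (rows of Y) onto their first principal
    components.  A minimal sketch using an SVD of the centred data."""
    Y_centred = Y - Y.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Y_centred, full_matrices=False)
    return Y_centred @ Vt[:n_components].T

# Toy spectra standing in for the image pixels (no real Madonna data here).
rng = np.random.default_rng(1)
Y = rng.random((100, 32))
Z = pca_project(Y)
```

Plotting the three columns of `Z` as a 3-D scatter, with the ground-truth clusters coloured, reproduces the kind of view shown in Fig. 8 (a): endmember candidates appear near the vertices of the projected data cloud.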

Fig. 8: (a): Representation of the pixels (black dots) of the Madonna image and the ground-truth clusters corresponding to oak trees (red dots) and chestnut trees (blue dots) using the first three principal components provided by standard PCA. (b): Representation of the pixels (dots) of the Madonna data and the manifold identified by the LL-GPLVM (colored surface). (c): Representation of the pixels (dots) of the Madonna data and the boundaries of the estimated transformed simplex (blue lines).

After fixing the simulation parameters, the latent variables obtained by maximizing the marginalized posterior distribution (10) are depicted in Fig. 9 (blue dots).

Fig. 9: Representation of the latent variables (dots) estimated by the LL-GPLVM and the simplex identified by the scaling step (red lines) for the Madonna data.

It can be seen from this figure that the latent variables seem to describe a noisy simplex. Fig. 8 (b) shows the manifold estimated by the proposed LL-GPLVM, illustrating its capacity to model the nonlinear manifold. Table V (left) compares the AREs obtained by the proposed LL-GPLVM and by projection onto the first principal vectors provided by PCA. The proposed LL-GPLVM slightly outperforms PCA for the real data of interest, which shows that the proposed nonlinear dimensionality reduction method represents the data more accurately than PCA (linear dimensionality reduction).

The scaling step presented in Section IV is then applied to the estimated latent variables. The estimated simplex defined by the latent variables is depicted in Fig. 9 (red lines). Fig. 8 (c) compares the boundaries of the estimated transformed simplex with the image pixels. The abundance maps obtained after the scaling step are shown in Fig. 10 (top). The results of the unmixing procedure using the FCLL-GPLVM are compared to an unmixing strategy assuming the LMM; more precisely, we use VCA to extract the endmembers from the data and the FCLS algorithm for abundance estimation. The estimated abundance maps are depicted in Fig. 10 (bottom). The abundance maps obtained by the two methods are similar, which shows the accuracy of the proposed unmixing strategy when the LMM is considered as a first-order approximation of the mixing model.
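The ARE figures quoted in Table V can be computed, given reconstructed pixels, with a short routine. The normalisation below, ARE = sqrt(Σ‖y_n − ŷ_n‖² / (N·L)), is a standard choice and is assumed here rather than taken verbatim from the paper:

```python
import numpy as np

def average_reconstruction_error(Y, Y_hat):
    """Average reconstruction error between the N observed pixel spectra
    (rows of Y, each of length L) and their reconstructions Y_hat,
    ARE = sqrt( sum_n ||y_n - y_hat_n||^2 / (N * L) )."""
    N, L = Y.shape
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / (N * L))

# Toy comparison: a lower-error reconstruction yields a smaller ARE.
rng = np.random.default_rng(2)
Y = rng.random((50, 16))
are_good = average_reconstruction_error(Y, Y + 0.01)
are_bad = average_reconstruction_error(Y, Y + 0.1)
```

Because the error is averaged per band and per pixel, AREs are comparable across images of different sizes and numbers of spectral bands, which is what makes the PCA / LL-GPLVM and VCA+FCLS / FCLL-GPLVM columns of Table V directly comparable.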

Dimensionality reduction        Unmixing
PCA      LL-GPLVM               VCA+FCLS    FCLL-GPLVM
0.84     0.79                   1.30        1.11
TABLE V: AREs for the real image (left: dimensionality reduction; right: unmixing).

Moreover, Fig. 11 shows the classification map obtained in [Sheeren2011] for the region of interest. The unclassified pixels correspond to areas where the classification method of [Sheeren2011] was not applied. Although many pixels are unclassified, the classified pixels can be compared with the estimated abundance maps. First, we note the presence of the same tree species, i.e., oak and chestnut, in both the classification and abundance maps. We can also see that the pixels composed of chestnut trees and the third endmember are mainly located in the unclassified regions, which explains why they do not appear clearly in the classification map. Only one pixel in the region of interest is classified as being composed of ash trees; if unclassified pixels also contain ash trees, they are either too few or too mixed to be considered as mixtures of an additional endmember in the image. Finally, it can be seen from Figs. 10 and 11 that oak trees are located within similar regions (left corners and top right corner) in both the abundance and classification maps.

Evaluating the performance of endmember estimation on real data is an interesting problem. However, comparing the estimated endmembers with the ground truth is difficult here. First, since the nature of the third endmember is unknown, no ground truth is available for it. Second, because of the variability of the ground-truth spectra associated with each tree species, it is difficult to establish whether VCA or the proposed FCLL-GPLVM provides the better endmember estimates. However, the AREs obtained for both methods (Table V, right) show that the FCLL-GPLVM fits the data better than the linear SU strategy, which confirms the value of the proposed algorithm for nonlinear spectral unmixing.

Fig. 10: Top: Abundance maps estimated using the FCLL-GPLVM for the Madonna image. Bottom: Abundance maps estimated using the VCA algorithm for endmember extraction and the FCLS algorithm for abundance estimation.
Fig. 11: Classification map obtained in [Sheeren2011] for the region of interest of the Madonna image.

VII Conclusions

We proposed a new algorithm for nonlinear spectral unmixing based on a Gaussian process latent variable model. The unmixing procedure assumed a nonlinear mapping from the abundance space to the observed pixels and accounted for the physical constraints on the abundance vectors. The abundance estimation was decomposed into two steps: dimensionality reduction was first achieved using latent variables, and a scaling procedure was then used to estimate the abundances. After estimating the abundance vectors of the image, a new endmember estimator based on Gaussian process regression was investigated. Simulations conducted on synthetic and real images illustrated the flexibility of the proposed model for linear and nonlinear spectral unmixing, and provided promising results for abundance and endmember estimation even in the absence of pure pixels in the image. The choice of the nonlinear mapping used for the GP model is an important issue, since the LL-GPLVM must be general enough to handle different nonlinearities; in particular, different mappings could be used for intimate mixtures. Moreover, embedding the GPLVM in a fully Bayesian framework, so as to capture spatial correlation through Markov random fields or GPs, is an interesting direction for future work.

Acknowledgments

The authors would like to thank Dr. Mathieu Fauvel from the University of Toulouse - INP/ENSAT, Toulouse, France, for supplying the real image, ground truth data and classification map related to the classification algorithm studied in [Sheeren2011] and used in this paper.

References