I Introduction
Hyperspectral imaging has attracted considerable interest from the scientific community in the past two decades, and hyperspectral images (HIs) have been explored in a vast and increasing number of applications in different fields [1]. The limited spatial resolution of hyperspectral devices often mixes the spectral contributions of different pure materials, termed endmembers, in the scene [2]. This phenomenon is more pronounced in remote sensing applications due to the distance between airborne and spaceborne sensors and the target scene. The mixing process must be well understood for the vital information relating the pure materials and their distribution in the scene to be accurately unveiled. Hyperspectral unmixing (HU) aims at solving this problem by decomposing the hyperspectral image into a collection of endmembers and their fractional abundances [3].
Different mixing models have been employed to explain the interaction between light and the endmembers [1, 4]. The simplest and most widely used model is the Linear Mixing Model (LMM) [2], which assumes that the observed reflectance vector (i.e., a pixel) can be modeled as a convex combination of the spectral signatures of the endmembers present in the scene. Convexity imposes positivity and sum-to-one constraints on the linear combination coefficients. Hence, they represent the fractional abundances with which the endmembers contribute to the scene. Though the simplicity of the LMM leads to fast and reliable unmixing strategies in some situations, it turns out to be too simplistic to explain the mixing process in many practical applications. Hence, several approaches have been proposed in the literature to account for nonlinear mixing effects [4, 5, 6] and endmember variability [7, 8, 9], often present in practical scenes. A myriad of factors can induce endmember variability, including environmental, illumination, atmospheric and temporal changes [7]. If not properly considered, such variability can result in significant estimation errors being propagated throughout the unmixing process [10].
Most of the methods proposed so far to deal with spectral variability can be classified into three major groups: endmembers as sets, endmembers as statistical distributions and, more recently, methods that incorporate the variability in the mixing model, often using physically motivated concepts [8]. The method proposed in this work belongs to the third group. Recently, [10], [11] and [12] introduced variations of the LMM to cope with spectral variability. Unmixing using these models leads to ill-posed problems that were solved by using a combination of different regularization terms and variable splitting optimization strategies. The Perturbed LMM (PLMM) in [10] augmented the endmember matrix with an additive perturbation matrix that needs to be estimated jointly with the abundances. Although the additive perturbation can model arbitrary endmember variations, it is not physically motivated, and the excessive number of degrees of freedom makes the problem even harder to solve. The Extended LMM (ELMM) proposed in [11] introduces one new multiplicative term for each endmember, and can efficiently model changes in the observed reflectance due to illumination effects [11]. This model has a clear physical motivation, but its modeling capability is limited. The Generalized Linear Mixing Model (GLMM) proposed in [12] generalizes the ELMM to account for variability in all regions of the measured spectrum. The GLMM is physically motivated and capable of modeling arbitrary variability, resulting in improved accuracy at the expense of a small increase in computational complexity when compared to the ELMM. The above-mentioned methods resort to different strategies to regularize the ill-posed optimization problem leading to the estimation of abundances and endmembers. The regularization is achieved by introducing into the unmixing problem additional information based on common knowledge about the low dimensionality of structures embedded in hyperspectral images.
Possible ways to recover lower-dimensional structures from noisy and corrupted data include the imposition of low-rank matrix constraints on the estimation process [13], or the low-rank decomposition of the observed data [14, 15]. The facts that HIs are naturally represented and treated as tensors, and that low-rank decompositions of higher-order (order greater than 2) tensors tend to capture homogeneities within the tensor structure, make such strategies even more attractive for HU. Low-rank tensor models have been successfully employed in various tasks involving HIs, such as recovery of missing pixels [16, 17], classification [18], compression [19], dimensionality reduction [20] and analysis of multiangle images [21]. More recently, [22] and [21] considered low-rank tensor decompositions applied to standard and multitemporal HU, respectively. In [22] the HI is treated as a three-dimensional tensor, and spatial regularity is enforced through a nonnegative tensor factorization (NTF) strategy that imposes a low-rank tensor structure. In [21], a nonnegative canonical polyadic decomposition was used to unmix multitemporal HIs represented as three-dimensional tensors built by stacking multiple temporal matricized HIs. Though a low-rank tensor representation may naturally describe the regularity of HIs and abundance maps, the forceful introduction of stringent rank constraints may prevent an adequate representation of fast-varying structures that are important for accurate unmixing. Another limitation of the approaches proposed in [22] and [21] is the lack of guarantee that endmembers and abundances will be correctly factorized into their respective tensors. In [23], we proposed a new low-rank HU method called Unmixing with Low-rank Tensor Regularization Algorithm (ULTRA), which accounts for highly correlated endmembers. The HU problem was formulated using tensors, and a low-rank abundance tensor regularization term was introduced. Differently from the strict tensor decompositions considered in [22, 21], ULTRA allowed important flexibility in the rank of the estimated abundance tensor to adequately represent fine-scale structure and details that lie beyond a low-rank structure, but without compromising the regularity of the solution.
In this work we extend the strategy proposed in [23] to account for endmember variability. A second low-rank regularization is imposed on the four-dimensional endmember tensor, which contains one endmember matrix for each pixel. The strategy results in an iterative algorithm, named Unmixing with Low-rank Tensor Regularization Algorithm accounting for endmember Variability (ULTRA-V). At each iteration, ULTRA-V updates the estimates of the abundance and endmember tensors as well as their low-rank approximations. One important challenge of using rank-based approximations is that determining the rank of a tensor is a difficult task. An important contribution of this work is the proposition of a strategy to determine the smallest rank representation that contains most of the variation of the multilinear singular values [24]. Simulation results using synthetic and real data illustrate the performance improvement obtained using ULTRA-V when compared to competing methods, as well as its competitive computational complexity for relatively small images. The paper is organized as follows. Section II briefly reviews important background on linear mixing models and the definitions and notation used for tensors. Section III presents the proposed solution and the strategy to estimate tensor ranks. Section IV presents the simulation results and comparisons. Finally, Section V presents the conclusions.
II Background and notation
II-A Extended Linear Mixing Models
The Linear Mixing Model (LMM) [2] assumes that a given pixel $\mathbf{y}_n \in \mathbb{R}^L$, with $L$ bands, is represented as

(1) $\quad \mathbf{y}_n = \mathbf{M}\mathbf{a}_n + \mathbf{e}_n, \quad \text{s.t. } \mathbf{1}^\top\mathbf{a}_n = 1, \ \mathbf{a}_n \succeq \mathbf{0}$

where $\mathbf{M}$ is an $L \times P$ matrix whose columns are the endmembers $\mathbf{m}_p$, $\mathbf{a}_n$ is the abundance vector, $\mathbf{e}_n \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}_L)$ is an additive white Gaussian noise (WGN), $\mathbf{I}_L$ is the $L \times L$ identity matrix, and $\succeq$ is the entrywise $\geq$ operator. The LMM assumes that the pure-material endmembers are fixed for all pixels $\mathbf{y}_n$, $n = 1, \ldots, N$, in the HI. This assumption can jeopardize the accuracy of estimated abundances in many circumstances due to the spectral variability existing in a typical scene.
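To make the model concrete, the following sketch (our own illustration with arbitrary dimensions, not taken from the paper) generates one noisy LMM pixel satisfying the abundance constraints:

```python
import numpy as np

rng = np.random.default_rng(0)
L, P = 224, 3                                # number of bands and of endmembers

M = rng.random((L, P))                       # endmember matrix: columns are spectral signatures
a = rng.random(P)
a /= a.sum()                                 # abundances: nonnegative and sum-to-one
sigma = 0.01                                 # noise standard deviation
y = M @ a + sigma * rng.standard_normal(L)   # observed pixel: y = M a + e

print(a.sum())                               # 1.0 up to floating-point error
```

Recovering `a` from `y` and `M` under the same constraints is exactly the (fully constrained) least-squares unmixing problem mentioned later in the paper.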
Different extensions of the LMM have been recently proposed to mitigate this limitation. These models employ a different endmember matrix for each pixel, and are particular cases of the model

(2) $\quad \mathbf{y}_n = \mathbf{M}_n\mathbf{a}_n + \mathbf{e}_n$

where $\mathbf{M}_n$ is the endmember matrix for the $n$-th pixel. Different parametric models propose different forms for $\mathbf{M}_n$ to account for spectral variability. These include additive perturbations over a mean matrix in the PLMM [10], and multiplicative factors applied individually to each endmember in the ELMM [11] or to each band in the GLMM [12]. Moreover, spatial regularization of the multiplicative scaling factors in the ELMM and GLMM helps to further mitigate the ill-posedness of the problem. These models, however, barely exploit the high-dimensional structure of the problem, which naturally suggests the representation of the HI, abundance maps, and endmember matrices for all pixels as higher-order tensors. In this work, instead of introducing a rigid parametric model for the endmembers, we employ a more general tensor model, using a well-devised low-rank constraint to introduce regularity into the estimated endmember tensor.
II-B Notation
An order-$N$ tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is an $N$-dimensional array with elements indexed by $(i_1, \ldots, i_N)$. The dimensions of a tensor are called modes. A mode-$k$ fiber of tensor $\mathcal{T}$ is the one-dimensional subset of $\mathcal{T}$ obtained by fixing all but the $k$-th dimension, and is indexed by $[\mathcal{T}]_{i_1,\ldots,i_{k-1},\,:\,,i_{k+1},\ldots,i_N}$. A slab or slice of tensor $\mathcal{T}$ is a two-dimensional subset of $\mathcal{T}$ obtained by fixing all but two of its modes. An HI is often conceived as a three-dimensional data cube, and can be naturally represented by an order-3 tensor $\mathcal{Y} \in \mathbb{R}^{N_1 \times N_2 \times L}$, containing pixels represented by the mode-3 tensor fibers $\mathbf{y}_{n_1,n_2} = [\mathcal{Y}]_{n_1,n_2,:}$. Analogously, the abundances can also be collected in an order-3 tensor $\mathcal{A} \in \mathbb{R}^{N_1 \times N_2 \times P}$. Thus, given a pixel $\mathbf{y}_{n_1,n_2}$, the respective abundance vector is represented by the mode-3 fiber $\mathbf{a}_{n_1,n_2} = [\mathcal{A}]_{n_1,n_2,:}$. Similarly, the endmember matrices for each pixel can be represented as an order-4 tensor $\mathcal{M} \in \mathbb{R}^{N_1 \times N_2 \times L \times P}$, where $\mathbf{M}_{n_1,n_2} = [\mathcal{M}]_{n_1,n_2,:,:}$. We now review some operations of multilinear algebra (the algebra of tensors) that will be used in the following sections (more details can be found in [25]).
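In NumPy terms (a minimal illustration with made-up dimensions), fibers and slices are obtained by fixing all but one, or all but two, of the indices:

```python
import numpy as np

# A toy order-3 "image": N1 = 2 rows, N2 = 3 columns, L = 4 bands
Y = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)

spectrum = Y[1, 2, :]   # mode-3 fiber: the spectrum of pixel (1, 2), shape (4,)
band = Y[:, :, 0]       # slice: the image of the first band, shape (2, 3)
```

For a real HI, the mode-3 fiber `Y[n1, n2, :]` is exactly the reflectance vector of pixel `(n1, n2)` used throughout the paper.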
II-C Tensor product definitions
Definition 1.
Outer product: The outer product between vectors $\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(N)}$ is defined as the order-$N$ tensor $\mathcal{T} = \mathbf{u}^{(1)} \circ \cdots \circ \mathbf{u}^{(N)}$, where $[\mathcal{T}]_{i_1,\ldots,i_N} = u^{(1)}_{i_1} \cdots u^{(N)}_{i_N}$ and $u^{(k)}_{i_k}$ is the $i_k$-th position of $\mathbf{u}^{(k)}$. It generalizes the outer product between two vectors.
Definition 2.
Mode-$k$ product: The mode-$k$ product, denoted $\mathcal{T} \times_k \mathbf{B}$, of a tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and a matrix $\mathbf{B} \in \mathbb{R}^{J \times I_k}$ is evaluated such that each mode-$k$ fiber of $\mathcal{T}$ is multiplied by matrix $\mathbf{B}$, yielding $[\mathcal{T} \times_k \mathbf{B}]_{i_1,\ldots,i_{k-1},j,i_{k+1},\ldots,i_N} = \sum_{i_k=1}^{I_k} [\mathcal{T}]_{i_1,\ldots,i_N} [\mathbf{B}]_{j,i_k}$.
Definition 3.
Multilinear product: The full multilinear product, denoted by $[\![\mathcal{T};\, \mathbf{B}^{(1)}, \ldots, \mathbf{B}^{(N)}]\!]$, consists of the successive application of mode-$k$ products between $\mathcal{T}$ and matrices $\mathbf{B}^{(1)}, \ldots, \mathbf{B}^{(N)}$, represented as $\mathcal{T} \times_1 \mathbf{B}^{(1)} \times_2 \cdots \times_N \mathbf{B}^{(N)}$.
Definition 4.
Mode-$(M,1)$ contracted product: The contracted mode-$M$ product, denoted by $\mathcal{T}\, \bar{\times}_M\, \mathbf{v}$, is a product between a tensor $\mathcal{T}$ and a vector $\mathbf{v} \in \mathbb{R}^{I_M}$ in mode $M$, where the resulting singleton dimension is removed, given by $[\mathcal{T}\, \bar{\times}_M\, \mathbf{v}]_{i_1,\ldots,i_{M-1},i_{M+1},\ldots,i_N} = \sum_{i_M=1}^{I_M} [\mathcal{T}]_{i_1,\ldots,i_N} v_{i_M}$.
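A sketch of these two products in NumPy (the function names are ours; the mode product matricizes along the chosen mode, applies the matrix, and folds back):

```python
import numpy as np

def mode_product(T, B, k):
    """Mode-k product T x_k B: multiply every mode-k fiber of T by matrix B."""
    Tk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)      # mode-k matricization
    out = B @ Tk
    new_shape = (B.shape[0],) + tuple(np.delete(T.shape, k))
    return np.moveaxis(out.reshape(new_shape), 0, k)       # fold back

def contracted_product(T, v, k):
    """Mode-(k,1) contracted product: contract mode k of T with vector v."""
    return np.tensordot(T, v, axes=([k], [0]))             # singleton dim removed

T = np.random.default_rng(1).random((2, 3, 4))
B = np.ones((5, 3))
print(mode_product(T, B, 1).shape)        # (2, 5, 4)
print(contracted_product(T, np.ones(4), 2).shape)   # (2, 3)
```

The contracted product is how a pixel's abundance fiber acts on the corresponding endmember slab in the tensor formulation of the mixing model.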
II-D The Canonical Polyadic Decomposition
An order-$N$ rank-1 tensor is obtained as the outer product of $N$ vectors. The rank of an order-$N$ tensor $\mathcal{T}$ is defined as the minimum number of order-$N$ rank-1 tensors that must be added to obtain $\mathcal{T}$ [15]. Thus, any tensor $\mathcal{T}$ of rank $K$ can be decomposed as a linear combination of at least $K$ outer products of vectors (rank-1 tensors). This so-called polyadic decomposition is illustrated in Figure 1. When this decomposition involves exactly $K$ terms, it is called the canonical polyadic decomposition (CPD) [25] of a rank-$K$ tensor $\mathcal{T}$, and is given by

(3) $\quad \mathcal{T} = \sum_{k=1}^{K} \mathbf{x}^{(1)}_k \circ \mathbf{x}^{(2)}_k \circ \cdots \circ \mathbf{x}^{(N)}_k$
It has been shown that this decomposition is essentially unique under mild conditions [15]. The CPD can be written alternatively using mode-$k$ products as

(4) $\quad \mathcal{T} = \mathcal{I}_K \times_1 \mathbf{X}^{(1)} \times_2 \mathbf{X}^{(2)} \cdots \times_N \mathbf{X}^{(N)}$

or using the full multilinear product as

(5) $\quad \mathcal{T} = [\![\mathcal{I}_K;\, \mathbf{X}^{(1)}, \ldots, \mathbf{X}^{(N)}]\!]$

where $\mathcal{I}_K$ is the $K \times \cdots \times K$ ($N$-dimensional) diagonal tensor with $[\mathcal{I}_K]_{k,k,\ldots,k} = 1$, and $\mathbf{X}^{(j)} = [\mathbf{x}^{(j)}_1, \ldots, \mathbf{x}^{(j)}_K]$, for $j = 1, \ldots, N$. Given a tensor $\mathcal{T}$, the CPD can be obtained by solving the following optimization problem [15]

(6) $\quad \min_{\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}} \big\| \mathcal{T} - [\![\mathcal{I}_K;\, \mathbf{X}^{(1)}, \ldots, \mathbf{X}^{(N)}]\!] \big\|_F^2$
A widely used strategy to compute an approximate solution to (6) is to use an alternating least-squares technique [15], which optimizes the cost function with respect to one term at a time, while keeping the others fixed, until convergence. Although optimization problem (6) is generally nonconvex, its solution is unique under relatively mild conditions, which is an important advantage of tensor-based methods [15].
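The alternating least-squares idea for an order-3 CPD can be sketched as follows (a bare-bones illustration with our own helper names, without the normalization and stopping heuristics a production implementation would use):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two matrices with R columns each."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cpd_als(T, R, n_iter=100, seed=0):
    """Rank-R CPD of an order-3 tensor by alternating least squares:
    each factor is updated by a linear least-squares solve while the
    other two are held fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, R)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def cpd_reconstruct(A, B, C):
    """Sum of R rank-1 terms, i.e. Eq. (3) for N = 3."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On a tensor that is exactly low rank, a few hundred sweeps typically recover the tensor to high accuracy, though ALS convergence is not guaranteed in general.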
II-E Tensor rank bounds
Finding the rank of an arbitrary tensor is NP-hard [26]. In [15], upper and lower bounds on tensor ranks are presented for arbitrary tensors. Let $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ be an order-3 tensor and let

(7) $\quad R_1 = \operatorname{rank}(\mathbf{T}_{(1)}), \quad R_2 = \operatorname{rank}(\mathbf{T}_{(2)}), \quad R_3 = \operatorname{rank}(\mathbf{T}_{(3)})$

be the mode-1 (column), mode-2 (row) and mode-3 (fiber) ranks, respectively, of $\mathcal{T}$, where $\mathbf{T}_{(i)}$ denotes the mode-$i$ matricization of $\mathcal{T}$. Then, the rank $K$ which is able to represent an arbitrary tensor is limited to the interval

(8) $\quad \max\{R_1, R_2, R_3\} \leq K \leq \min\{R_1 R_2,\, R_1 R_3,\, R_2 R_3\}$

The reader can note that the bounds presented above often lead to very large tensor ranks. In many practical applications, however, the "useful signal" rank is often much smaller than the actual tensor rank [15]. Hence, when low-rank decompositions are employed to extract low-dimensional structures from the signal, the ranks that lead to meaningful results are usually much smaller than $K$. In Section III-E, we propose a strategy to estimate the rank of tensor CPDs based on the variation of the multilinear singular values of $\mathcal{T}$.
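These bounds are straightforward to compute numerically; a sketch for an order-3 tensor (helper names are ours):

```python
import numpy as np

def mode_ranks(T):
    """Ranks of the mode-1, mode-2 and mode-3 matricizations of an order-3 tensor."""
    return [int(np.linalg.matrix_rank(np.moveaxis(T, n, 0).reshape(T.shape[n], -1)))
            for n in range(3)]

def cp_rank_bounds(T):
    """Lower and upper bounds on the CP rank, in the spirit of Eq. (8)."""
    R1, R2, R3 = mode_ranks(T)
    return max(R1, R2, R3), min(R1 * R2, R1 * R3, R2 * R3)

# A rank-1 tensor has all mode ranks equal to 1, so both bounds collapse to 1
u, v, w = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0)
T1 = np.einsum('i,j,k->ijk', u, v, w)
print(cp_rank_bounds(T1))   # (1, 1)
```

For noisy data, `matrix_rank` will typically return the full numerical rank, which is one concrete reason the bounds overestimate the "useful" rank discussed above.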
III Low-rank Unmixing Problem
An effective strategy to capture the low-dimensional structures of HIs when solving the HU problem is to impose a low-rank structure on the abundance tensor [22]. The same strategy can also be applied to the endmember tensor if one considers the endmember variability to be small or highly correlated in low-dimensional structures within the HI. Thus, assuming that $\mathcal{A}$ has a low rank $K_{\mathcal{A}}$, and that $\mathcal{M}$ has a low rank $K_{\mathcal{M}}$, the global cost functional for the unmixing problem can be written as

(9) $\quad \mathcal{J}(\mathcal{A}, \mathcal{M}) = \frac{1}{2} \sum_{n_1,n_2} \big\| \mathbf{y}_{n_1,n_2} - \mathbf{M}_{n_1,n_2}\, \mathbf{a}_{n_1,n_2} \big\|^2, \quad \text{s.t. } \operatorname{rank}(\mathcal{A}) = K_{\mathcal{A}}, \ \operatorname{rank}(\mathcal{M}) = K_{\mathcal{M}}$
Defining the HU problem as in (9), with fixed data-independent ranks $K_{\mathcal{A}}$ and $K_{\mathcal{M}}$, limits its flexibility to adequately represent the desired abundance maps and endmember variability. Though fixing low ranks for $\mathcal{A}$ and $\mathcal{M}$ tends to capture the most significant part of the tensors' energy [27], one may incur a loss of fine and small-scale details that may be relevant for specific data. On the other hand, using large values for $K_{\mathcal{A}}$ and $K_{\mathcal{M}}$ makes the solution sensitive to noise, undermining the purpose of the regularization. Thus, an important issue is how to effectively impose the low-rank constraint to achieve regularity in the solution without undermining its flexibility to adequately model small variations and details.
We propose to modify (9) by introducing new regularization terms, controlled by two low-rank tensors $\mathcal{B}$ and $\Psi$, to impose nonstrict constraints on $\mathcal{A}$ and $\mathcal{M}$. In this way, tensors $\mathcal{B}$ and $\Psi$ work as a priori information, and the strictness of the low-rank constraint is controlled by two additional parameters $\lambda_{\mathcal{A}}$ and $\lambda_{\mathcal{M}}$. The proposed cost function is given by

(10) $\quad \mathcal{J}(\mathcal{A}, \mathcal{M}, \mathcal{B}, \Psi) = \frac{1}{2} \sum_{n_1,n_2} \big\| \mathbf{y}_{n_1,n_2} - \mathbf{M}_{n_1,n_2}\, \mathbf{a}_{n_1,n_2} \big\|^2 + \frac{\lambda_{\mathcal{A}}}{2} \big\| \mathcal{A} - \mathcal{B} \big\|_F^2 + \frac{\lambda_{\mathcal{M}}}{2} \big\| \mathcal{M} - \Psi \big\|_F^2$

with $\operatorname{rank}(\mathcal{B}) = K_{\mathcal{B}}$ and $\operatorname{rank}(\Psi) = K_{\Psi}$. The optimization problem becomes

(11) $\quad \min_{\mathcal{A}, \mathcal{M}, \mathcal{B}, \Psi} \mathcal{J}(\mathcal{A}, \mathcal{M}, \mathcal{B}, \Psi), \quad \text{s.t. } \mathcal{A} \succeq 0, \ \mathcal{M} \succeq 0, \ \operatorname{rank}(\mathcal{B}) = K_{\mathcal{B}}, \ \operatorname{rank}(\Psi) = K_{\Psi}$
To solve (11), we propose to find a local stationary point by minimizing (10) iteratively with respect to each variable. The resulting algorithm is termed the Unmixing with Low-rank Tensor Regularization Algorithm accounting for spectral Variability (ULTRA-V), and is presented in Algorithm 1. The intermediate steps are detailed in the following.
III-A Solving with respect to $\mathcal{A}$
III-B Solving with respect to $\mathcal{M}$
Analogously to the previous section, to solve problem (11) with respect to $\mathcal{M}$, we use only the terms in (10) that depend on $\mathcal{M}$, leading to

(13) $\quad \min_{\mathcal{M}} \ \frac{1}{2} \sum_{n_1,n_2} \big\| \mathbf{y}_{n_1,n_2} - \mathbf{M}_{n_1,n_2}\, \mathbf{a}_{n_1,n_2} \big\|^2 + \frac{\lambda_{\mathcal{M}}}{2} \big\| \mathcal{M} - \Psi \big\|_F^2, \quad \text{s.t. } \mathcal{M} \succeq 0$

which results in a regularized nonnegative least-squares problem. An approximate solution can be obtained by ignoring the positivity constraint on the endmember tensor and projecting the least-squares result onto the positive orthant as [12]

(14) $\quad \mathbf{M}_{n_1,n_2} = P_{+}\!\Big( \big( \mathbf{y}_{n_1,n_2} \mathbf{a}_{n_1,n_2}^\top + \lambda_{\mathcal{M}} \boldsymbol{\Psi}_{n_1,n_2} \big) \big( \mathbf{a}_{n_1,n_2} \mathbf{a}_{n_1,n_2}^\top + \lambda_{\mathcal{M}} \mathbf{I}_P \big)^{-1} \Big)$

where $\boldsymbol{\Psi}_{n_1,n_2} = [\Psi]_{n_1,n_2,:,:}$ and $P_{+}(\cdot)$ is the projection operator that maps every negative element of its argument to zero.
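A sketch of one such per-pixel update (variable names are ours, and treating the update as a ridge-regularized least-squares solve toward a reference matrix followed by projection onto the nonnegative orthant is our reading of the step):

```python
import numpy as np

def update_endmembers(y, a, Psi_n, lam_M):
    """One pixel's endmember update: regularized LS projected onto M >= 0.

    Solves min_M ||y - M a||^2 + lam_M ||M - Psi_n||_F^2 in closed form,
    then maps negative entries to zero.
    """
    L, P = Psi_n.shape
    rhs = np.outer(y, a) + lam_M * Psi_n          # y a^T + lam_M Psi_n
    gram = np.outer(a, a) + lam_M * np.eye(P)     # a a^T + lam_M I
    M = rhs @ np.linalg.inv(gram)
    return np.maximum(M, 0.0)                     # projection P_+(.)
```

Note that if the observation is exactly consistent with the nonnegative reference, i.e. `y = Psi_n @ a`, the closed-form solve returns `Psi_n` itself, which is a convenient sanity check.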
III-C Solving with respect to $\Psi$
Rewriting the terms in (10) that depend on $\Psi$ leads to

(15) $\quad \min_{\Psi} \ \frac{\lambda_{\mathcal{M}}}{2} \big\| \mathcal{M} - \Psi \big\|_F^2, \quad \text{s.t. } \operatorname{rank}(\Psi) = K_{\Psi}$

Assuming that most of the energy of $\mathcal{M}$ lies in a low-rank structure, we write the tensor $\Psi$ as a sum of a small number $K_{\Psi}$ of rank-1 components, such that

(16) $\quad \Psi = \sum_{k=1}^{K_{\Psi}} \mathbf{x}^{(1)}_k \circ \mathbf{x}^{(2)}_k \circ \mathbf{x}^{(3)}_k \circ \mathbf{x}^{(4)}_k$

This introduces a low-rank a priori condition on $\mathcal{M}$ whose strictness can be controlled by the regularization constant $\lambda_{\mathcal{M}}$. Using (16) in (15) leads to the optimization problem

(17) $\quad \min_{\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(4)}} \ \big\| \mathcal{M} - \mathcal{I}_{K_{\Psi}} \times_1 \mathbf{X}^{(1)} \times_2 \mathbf{X}^{(2)} \times_3 \mathbf{X}^{(3)} \times_4 \mathbf{X}^{(4)} \big\|_F^2$

where $\mathcal{I}_{K_{\Psi}}$ is a four-dimensional diagonal tensor with $[\mathcal{I}_{K_{\Psi}}]_{k,k,k,k} = 1$. Problem (17) can be solved using an alternating least-squares strategy [15].

Finally, the solution $\Psi$ is obtained from $\mathbf{X}^{(1)}$, $\mathbf{X}^{(2)}$, $\mathbf{X}^{(3)}$ and $\mathbf{X}^{(4)}$ using the full multilinear product as

(18) $\quad \Psi = [\![\mathcal{I}_{K_{\Psi}};\, \mathbf{X}^{(1)}, \mathbf{X}^{(2)}, \mathbf{X}^{(3)}, \mathbf{X}^{(4)}]\!]$
III-D Solving with respect to $\mathcal{B}$
Analogously to the previous section, the cost function to be optimized with respect to $\mathcal{B}$ can be written as

(19) $\quad \min_{\mathcal{B}} \ \frac{\lambda_{\mathcal{A}}}{2} \big\| \mathcal{A} - \mathcal{B} \big\|_F^2, \quad \text{s.t. } \operatorname{rank}(\mathcal{B}) = K_{\mathcal{B}}$

Assuming that most of the energy of $\mathcal{A}$ lies in a low-rank structure, we write tensor $\mathcal{B}$ as a sum of a small number $K_{\mathcal{B}}$ of rank-1 components, such that

(20) $\quad \mathcal{B} = \sum_{k=1}^{K_{\mathcal{B}}} \mathbf{z}^{(1)}_k \circ \mathbf{z}^{(2)}_k \circ \mathbf{z}^{(3)}_k$

This introduces a low-rank a priori condition on $\mathcal{A}$, which will be more or less strictly enforced depending on the regularization constant $\lambda_{\mathcal{A}}$. Using (20) in (19) leads to the optimization problem

(21) $\quad \min_{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)},\mathbf{Z}^{(3)}} \ \big\| \mathcal{A} - \mathcal{I}_{K_{\mathcal{B}}} \times_1 \mathbf{Z}^{(1)} \times_2 \mathbf{Z}^{(2)} \times_3 \mathbf{Z}^{(3)} \big\|_F^2$

where $\mathcal{I}_{K_{\mathcal{B}}}$ is an order-3 diagonal tensor with $[\mathcal{I}_{K_{\mathcal{B}}}]_{k,k,k} = 1$. Problem (21) can be solved using an alternating least-squares strategy [15]. Finally, the solution $\mathcal{B}$ is obtained from $\mathbf{Z}^{(1)}$, $\mathbf{Z}^{(2)}$ and $\mathbf{Z}^{(3)}$ using the full multilinear product as

(22) $\quad \mathcal{B} = [\![\mathcal{I}_{K_{\mathcal{B}}};\, \mathbf{Z}^{(1)}, \mathbf{Z}^{(2)}, \mathbf{Z}^{(3)}]\!]$
III-E Estimating tensor ranks
In Section II-E we recalled important results relating bounds on order-3 tensor ranks to the span of the matricized versions of tensors. We have also noted, from our own experience, that those bounds tend to indicate tensor ranks that are larger than the rank associated with the information relevant for HU. Our interest in HU is to model low-dimensional structures of the HI using low-rank tensors. At the same time, this low-rank representation should be rich enough to include all dimensions of the original HI tensor that contain relevant information. We propose to approximate the "useful rank" of a tensor by the number of the largest singular values of its matricized versions required to represent most of the tensor energy.

Let $\mathbf{T}_{(i)}$ be the matricization of an arbitrary tensor $\mathcal{T}$ obtained by stacking all tensor fibers along the $i$-th tensor dimension. Let $\sigma^{(i)}_1 \geq \sigma^{(i)}_2 \geq \cdots$ be the singular values of $\mathbf{T}_{(i)}$, sorted in descending order. Also, let $\boldsymbol{\delta}^{(i)}$ be the vector of first-order differences of these singular values, such that $\delta^{(i)}_j = \sigma^{(i)}_j - \sigma^{(i)}_{j+1}$. Then, we define the $i$-th candidate for the rank of $\mathcal{T}$ as the smallest index $j$ such that $\delta^{(i)}_j$ is sufficiently small, namely,

(23) $\quad \hat{K}_i = \min\big\{ j : \delta^{(i)}_j \leq \epsilon \big\}$

where $\epsilon$ is a parameter limiting the singular value variation. A single fixed value of $\epsilon$ was used in all experiments reported here. Finally, we approximate the rank of tensor $\mathcal{T}$ as

(24) $\quad \hat{K} = \max_i \hat{K}_i$

For the experiments reported in this paper, we have used definition (24) to estimate $K_{\Psi}$ and $K_{\mathcal{B}}$ in (16) and (20) from the abundance and endmember tensors estimated using simple unmixing strategies such as the scaled constrained least squares (SCLS) [11].
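Our reading of this procedure can be sketched as follows (the threshold value, its normalization, and taking the maximum over the per-mode candidates are assumptions, not taken from the paper):

```python
import numpy as np

def estimate_rank(T, eps=1e-2):
    """Heuristic 'useful rank' from the decay of the multilinear singular values.

    For each mode: matricize, take the singular values (sorted descending),
    compute first-order differences, and keep the smallest index at which the
    variation becomes small relative to the largest singular value.
    """
    candidates = []
    for n in range(T.ndim):
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)   # mode-n matricization
        s = np.linalg.svd(Tn, compute_uv=False)             # descending order
        d = -np.diff(s)                                     # delta_j = s_j - s_{j+1}
        small = np.nonzero(d <= eps * s[0])[0]
        candidates.append(int(small[0]) + 1 if small.size else len(s))
    # combine the per-mode candidates (taking the maximum is our choice here)
    return max(candidates)
```

For instance, a tensor whose matricizations have singular values that drop to (numerically) zero after the first few entries yields a correspondingly small rank estimate, far below the bounds of Eq. (8).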
Table I: Simulation results for the synthetic data cubes (column labels inferred from the metrics defined in Section IV; Time in seconds).

Data Cube 0 – DC0

| Algorithm | MSE_A | MSE_M | SAM_M | MSE_Y | SAM_Y | Time |
|-----------|-------|-------|-------|-------|-------|------|
| FCLS | 1.81 | – | – | 16.11 | 5.92 | 0.42 |
| SCLS | 0.68 | 175.79 | 6.19 | 59.38 | 5.42 | 0.38 |
| PLMM | 1.05 | 88.07 | 5.64 | 4.72 | 3.37 | 81.89 |
| ELMM | 0.35 | 106.17 | 5.63 | 4.78 | 3.40 | 17.15 |
| GLMM | 0.34 | 101.51 | 5.87 | 5.7e-3 | 0.09 | 20.23 |
| ULTRA-V | 0.23 | 92.39 | 5.56 | 0.73 | 1.32 | 14.46 |

Data Cube 1 – DC1

| Algorithm | MSE_A | MSE_M | SAM_M | MSE_Y | SAM_Y | Time |
|-----------|-------|-------|-------|-------|-------|------|
| FCLS | 2.01 | – | – | 6.93 | 3.71 | 0.74 |
| SCLS | 2.07 | 92.16 | 5.37 | 24.78 | 3.41 | 0.76 |
| PLMM | 1.86 | 60.83 | 4.34 | 2.39 | 2.27 | 88.35 |
| ELMM | 1.29 | 69.15 | 5.95 | 0.01 | 0.09 | 23.18 |
| GLMM | 1.20 | 68.11 | 6.09 | 0.01 | 0.08 | 29.81 |
| ULTRA-V | 1.12 | 60.12 | 5.21 | 4.26 | 2.56 | 29.47 |

Data Cube 2 – DC2

| Algorithm | MSE_A | MSE_M | SAM_M | MSE_Y | SAM_Y | Time |
|-----------|-------|-------|-------|-------|-------|------|
| FCLS | 1.90 | – | – | 2.03 | 13.69 | 0.30 |
| SCLS | 0.71 | 1.66 | 2.29 | 1.07 | 12.68 | 0.30 |
| PLMM | 1.27 | 2.45 | 2.28 | 2.12 | 12.17 | 61.13 |
| ELMM | 0.63 | 2.84 | 3.32 | 2.31 | 13.13 | 11.15 |
| GLMM | 0.59 | 1.84 | 2.79 | 2.04 | 12.58 | 11.60 |
| ULTRA-V | 0.46 | 2.91 | 2.94 | 9e-5 | 0.49 | 5.19 |
IV Simulations
In this section, the performance of the proposed methodology is illustrated through simulations with both synthetic and real data. We compare the proposed ULTRA-V method with the fully constrained least squares (FCLS), the scaled constrained least squares (SCLS) [11], the PLMM [10], the ELMM [11], and the GLMM [12].
To measure the accuracy of the unmixing methods we consider the Mean Squared Error (MSE)

(25) $\quad \mathrm{MSE}_{\mathcal{T}} = \frac{1}{N_{\mathcal{T}}} \big\| \operatorname{vec}(\mathcal{T}) - \operatorname{vec}(\widehat{\mathcal{T}}) \big\|^2$

where $\operatorname{vec}(\cdot)$ is the vectorization operator, $N_{\mathcal{T}}$ is the number of elements of $\mathcal{T}$, and $\mathcal{T} \in \{\mathcal{A}, \mathcal{M}, \mathcal{Y}\}$, and the Spectral Angle Mapper for the HI

(26) $\quad \mathrm{SAM}_{\mathcal{Y}} = \frac{1}{N} \sum_{n=1}^{N} \arccos\!\left( \frac{\mathbf{y}_n^\top \hat{\mathbf{y}}_n}{\|\mathbf{y}_n\| \, \|\hat{\mathbf{y}}_n\|} \right)$

and for the endmember tensor

(27) $\quad \mathrm{SAM}_{\mathcal{M}} = \frac{1}{NP} \sum_{n=1}^{N} \sum_{p=1}^{P} \arccos\!\left( \frac{\mathbf{m}_{n,p}^\top \hat{\mathbf{m}}_{n,p}}{\|\mathbf{m}_{n,p}\| \, \|\hat{\mathbf{m}}_{n,p}\|} \right)$
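A sketch of these metrics (the averaging conventions, per-element for the MSE and per-pixel for the SAM, are our assumptions):

```python
import numpy as np

def mse(T, T_hat):
    """Mean-squared error between the vectorized true and estimated tensors."""
    t, th = T.ravel(), T_hat.ravel()
    return float(np.sum((t - th) ** 2) / t.size)

def sam(Y, Y_hat):
    """Mean spectral angle (radians) between true and estimated pixel spectra.

    Y, Y_hat: arrays of shape (N1, N2, L); angles are averaged over pixels.
    """
    y = Y.reshape(-1, Y.shape[-1])
    yh = Y_hat.reshape(-1, Y_hat.shape[-1])
    cos = np.sum(y * yh, axis=1) / (np.linalg.norm(y, axis=1)
                                    * np.linalg.norm(yh, axis=1))
    # clip guards arccos against tiny floating-point excursions outside [-1, 1]
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that the SAM is invariant to per-pixel scaling of the spectra, which is why it complements the MSE when scaling-type variability is present.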
IV-A Synthetic data
For a comprehensive comparison among the different methods we created three synthetic datasets, namely Data Cube 0 (DC0), Data Cube 1 (DC1) and Data Cube 2 (DC2), with 50×50 pixels (DC0 and DC2) and 70×70 pixels (DC1). DC0 and DC1 were built using three 224-band endmembers extracted from the USGS Spectral Library [28], while DC2 was built using three 16-band signatures of minerals often found in bodies of the Solar System. For the three datasets, spatially correlated abundance maps were used, as depicted in the first column of Fig. 2. For DC0, we adopted the variability model used in [11] (a multiplicative factor acting on each endmember). For DC1, we used the variability model according to the PLMM [10]. For DC2, we used the Hapke model [29], devised to realistically represent the spectral variability introduced by changes in the illumination conditions caused by the topography of the scene [11]. White Gaussian noise was added to all datasets to yield a 30-dB SNR. In all cases, we used endmembers extracted using the VCA [30] either to build the reference endmember matrix or to initialize the different methods. The abundance maps were initialized using the maps estimated by the SCLS.
To select the optimal parameters for each algorithm, we performed grid searches for each dataset, using parameter search ranges based on the ranges tested and discussed by the authors in the original publications. For the PLMM, one regularization parameter was fixed at the value adopted by the authors in all their simulations, and the remaining two were selected by grid search. For both the ELMM and the GLMM, the regularization parameters were likewise selected by grid search, as were the two parameters $\lambda_{\mathcal{A}}$ and $\lambda_{\mathcal{M}}$ of the proposed ULTRA-V.
The results are shown in Table I, where the best and second-best results for each metric are marked in bold red and bold blue, respectively. ULTRA-V clearly outperformed the competing algorithms for all datasets in terms of the abundance estimation error MSE_A. For the other metrics, the best results depended on the dataset. In terms of MSE_M and SAM_M, ULTRA-V yielded the second-best results for DC0 and the best results for DC1. Finally, the ULTRA-V results for MSE_Y and SAM_Y were the second best for DC0 and the best for DC2. The execution times, shown in the rightmost column of Table I, show that ULTRA-V required the smallest execution time among the more sophisticated algorithms (PLMM, ELMM and GLMM) for DC0 and DC2, and a comparable execution time for DC1.
IV-A1 Parameter sensitivity
While we have proposed a strategy to determine rank values for tensors $\mathcal{B}$ and $\Psi$, the parameters $\lambda_{\mathcal{A}}$ and $\lambda_{\mathcal{M}}$ need to be selected by the user. We now study the sensitivity of the ULTRA-V performance to variations of these parameters within the search intervals presented in the previous section. Figure 3 shows the values of MSE_A resulting from unmixing the data using each combination of the parameter values. The sensitivity clearly tends to increase when values smaller than 1 are used for both parameters. Our practical experience indicates that good results can be obtained using small values for one of the parameters and large values, about 100, for the other. Moreover, some insensitivity is verified for small changes about large parameter values. Thus, a search with values spaced by decades, as done for the examples in the previous section, seems reasonable.
IV-B Real data
We considered the Houston and Cuprite datasets discussed in [11] for simulations with real data. Both datasets were captured by the AVIRIS instrument, which originally acquires 224 spectral bands. The water absorption bands were removed, resulting in 188 bands. The Houston dataset is known to have four predominant endmembers, while previous studies indicate the presence of at least 14 endmembers in the Cuprite mining field [30]. For both images the endmembers were extracted using the VCA [30]. Fig. 4 shows the reconstructed abundance maps for both images and for all tested methods. Table II shows the quantitative results in terms of MSE_Y and SAM_Y. The last column in Fig. 4 shows that the proposed ULTRA-V method provided smooth and accurate abundance estimates, clearly outperforming the competing algorithms. In fact, for the Concrete and Metallic Roofs endmembers, the ULTRA-V abundance maps present stronger concrete and metallic-roof components in the stadium stands and stadium towers, respectively, when compared with the other methods equipped to deal with spectral variability. Furthermore, the asphalt distribution seems more coherent.
For the Cuprite mining field we selected four endmembers whose distribution in the scene can be clearly distinguished [30], and that were also used in [11, 31, 32]. Examining the abundance maps in the last column of Fig. 4, ULTRA-V presents smoother abundance maps for all four depicted endmembers while keeping strong components in the regions where these endmembers are known to exist.
The objective metrics presented in Table II indicate that ULTRA-V yields competitive reconstruction errors in terms of MSE and SAM. These results, however, should be interpreted with proper care, as the connection between reconstruction error and abundance estimation accuracy is not straightforward. The execution times in Table II indicate that ULTRA-V did not scale well with the larger image size and higher number of endmembers of the Cuprite data, which directly impacted the CPD stage of ULTRA-V. Moreover, the more complex images resulted in higher rank estimates using the strategy discussed in Section III-E. This indicates that there is still room for improving the proposed method, either by providing a segmentation strategy or by using faster CPD methods. This, however, is an open problem that will be addressed in future work.
Table II: Simulation results for the real datasets (column labels inferred from the metrics defined in Section IV; Time in seconds).

| Algorithm | Houston MSE_Y | Houston SAM_Y | Houston Time | Cuprite MSE_Y | Cuprite SAM_Y | Cuprite Time |
|-----------|---------------|---------------|--------------|---------------|---------------|--------------|
| FCLS | 0.2283 | 5.7695 | 1.90 | 0.0114 | 2.7008 | 15.90 |
| SCLS | 0.0037 | 2.5361 | 2.04 | 0.0088 | 7.8559 | 14.09 |
| PLMM | 0.0190 | 1.6201 | 454.90 | 0.0027 | 7.6452 | 6971.04 |
| ELMM | 0.0010 | 1.3225 | 62.66 | 0.0025 | 7.7055 | 474.45 |
| GLMM | 1.0e-5 | 0.0472 | 32.26 | 0.0011 | 7.6264 | 1326.85 |
| ULTRA-V | 0.0018 | 1.5043 | 264.71 | 0.0012 | 0.8487 | 2092.18 |
V Conclusions
In this paper, we proposed a new low-rank regularization strategy for introducing low-dimensional spatial-spectral structure into the abundance and endmember tensors for hyperspectral unmixing considering spectral variability. The resulting iterative algorithm, called ULTRA-V, imposes low-rank structure by means of regularizations that force most of the energy of the estimated abundances and endmembers to lie within a low-dimensional structure. The proposed approach does not confine the estimated abundances and endmembers to a strict low-rank structure, which would not adequately account for the complexity experienced in real-world scenarios. The proposed methodology also includes a strategy to estimate the ranks of the regularization tensors $\mathcal{B}$ and $\Psi$, leaving only two parameters to be adjusted within a relatively reduced search space. Simulation results using both synthetic and real data showed that ULTRA-V can outperform state-of-the-art unmixing algorithms accounting for spectral variability.
References
 [1] J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. Nasrabadi, and J. Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geoscience and Remote Sensing Magazine, vol. 1, no. 2, pp. 6–36, 2013.
 [2] N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.
 [3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, “Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.
 [4] N. Dobigeon, J.-Y. Tourneret, C. Richard, J. C. M. Bermudez, S. McLaughlin, and A. O. Hero, “Nonlinear unmixing of hyperspectral images: Models and algorithms,” IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 82–94, Jan. 2014.
 [5] T. Imbiriba, J. C. M. Bermudez, C. Richard, and J.-Y. Tourneret, “Nonparametric detection of nonlinearly mixed pixels and endmember estimation in hyperspectral images,” IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1136–1151, March 2016.
 [6] T. Imbiriba, J. C. M. Bermudez, and C. Richard, “Band selection for nonlinear unmixing of hyperspectral images as a maximal clique problem,” IEEE Transactions on Image Processing, vol. 26, no. 5, pp. 2179–2191, May 2017.
 [7] A. Zare and K. C. Ho, “Endmember variability in hyperspectral analysis: Addressing spectral variability during spectral unmixing,” Signal Processing Magazine, IEEE, vol. 31, p. 95, January 2014.
 [8] L. Drumetz, J. Chanussot, and C. Jutten, “Variability of the endmembers in spectral unmixing: recent advances,” in 8th IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Los Angeles, USA, 2016.
 [9] R. A. Borsoi, T. Imbiriba, and J. C. M. Bermudez, “Superresolution for hyperspectral and multispectral image fusion accounting for seasonal spectral variability,” arXiv preprint arXiv:1808.10072, 2018.
 [10] P.-A. Thouvenin, N. Dobigeon, and J.-Y. Tourneret, “Hyperspectral unmixing with spectral variability using a perturbed linear mixing model,” IEEE Trans. Signal Processing, vol. 64, no. 2, pp. 525–538, Feb. 2016.
 [11] L. Drumetz, M.-A. Veganzones, S. Henrot, R. Phlypo, J. Chanussot, and C. Jutten, “Blind hyperspectral unmixing using an extended linear mixing model to address spectral variability,” IEEE Transactions on Image Processing, vol. 25, no. 8, pp. 3890–3905, 2016.
 [12] T. Imbiriba, R. A. Borsoi, and J. C. M. Bermudez, “Generalized linear mixing model accounting for endmember variability,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, April 2018, pp. 1862–1866.
 [13] M. Tao and X. Yuan, “Recovering lowrank and sparse components of matrices from incomplete and noisy observations,” SIAM Journal on Optimization, vol. 21, no. 1, pp. 57–81, 2011.
 [14] M. Bousse, O. Debals, and L. De Lathauwer, “A tensorbased method for largescale blind source separation using segmentation,” IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 346–358, 2017.

 [15] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551–3582, 2017.
 [16] M. K.-P. Ng, Q. Yuan, L. Yan, and J. Sun, “An adaptive weighted tensor completion method for the recovery of remote sensing images with missing data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 6, pp. 3367–3381, 2017.
 [17] X. Zhang, G. Wen, and W. Dai, “A tensor decompositionbased anomaly detection algorithm for hyperspectral image,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 10, pp. 5801–5820, 2016.
 [18] X. Guo, X. Huang, L. Zhang, L. Zhang, A. Plaza, and J. A. Benediktsson, “Support tensor machines for classification of hyperspectral remote sensing imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 6, pp. 3248–3264, 2016.
 [19] S. Yang, M. Wang, P. Li, L. Jin, B. Wu, and L. Jiao, “Compressive hyperspectral imaging via sparse tensor and nonlinear compressed sensing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 11, pp. 5943–5957, 2015.

 [20] L. Zhang, L. Zhang, D. Tao, and X. Huang, “Tensor discriminative locality alignment for hyperspectral image spectral–spatial feature extraction,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 1, pp. 242–256, 2013.
 [21] M. A. Veganzones, J. E. Cohen, R. C. Farias, J. Chanussot, and P. Comon, “Nonnegative tensor CP decomposition of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2577–2588, 2016.
 [22] Y. Qian, F. Xiong, S. Zeng, J. Zhou, and Y. Y. Tang, “Matrixvector nonnegative tensor factorization for blind unmixing of hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 3, pp. 1776–1792, 2017.
 [23] T. Imbiriba, R. A. Borsoi, and J. C. M. Bermudez, “A lowrank tensor regularization strategy for hyperspectral unmixing,” IEEE Statistical Signal Processing Workshop, pp. 373 – 377, 2018.

 [24] L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
 [25] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, “Tensor decompositions for signal processing applications: From two-way to multiway component analysis,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145–163, 2015.
 [26] J. Håstad, “Tensor rank is NP-complete,” Journal of Algorithms, vol. 11, no. 4, pp. 644–654, 1990.
 [27] S. Mei, J. Hou, J. Chen, L. P. Chau, and Q. Du, “Simultaneous spatial and spectral lowrank representation of hyperspectral images for classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. PP, no. 99, pp. 1–15, 2018.
 [28] R. N. Clark, G. A. Swayze, K. E. Livo, R. F. Kokaly, S. J. Sutley, J. B. Dalton, R. R. McDougal, and C. A. Gent, “Imaging spectroscopy: Earth and planetary remote sensing with the USGS Tetracorder and expert systems,” Journal of Geophysical Research: Planets, vol. 108, no. E12, 2003.
 [29] B. Hapke, “Bidirectional reflectance spectroscopy, 1, Theory,” Journal of Geophysical Research, vol. 86, no. B4, pp. 3039–3054, 1981.
 [30] J. M. P. Nascimento and J. M. BioucasDias, “Vertex Component Analysis: A fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898–910, April 2005.
 [31] R. A. Borsoi, T. Imbiriba, J. C. M. Bermudez, and C. Richard, “Tech report: A fast multiscale spatial regularization for sparse hyperspectral unmixing,” arXiv preprint arXiv:1712.01770, 2017.
 [32] R. A. Borsoi, T. Imbiriba, and J. C. M. Bermudez, “A data dependent multiscale model for hyperspectral unmixing with spectral variability,” IEEE Trans. on Image Processing (submitted), 2018.