Recently, hyperspectral image classification has become a hot topic due to the plentiful information from the hundreds of spectral channels contained in the image. However, the great similarity between the spectral signatures of different objects makes the task a challenging one. Moreover, the limited number of labelled samples in real-world applications increases the difficulty of obtaining discriminative spectral features from the image. To overcome this problem, spatial information is usually incorporated into the representation to provide discriminative features. However, modelling the spatial and spectral information directly with usual handcrafted features cannot capture the complex structure and high-level information within the image.
Deep models have shown a powerful ability to describe abstract, high-level information and have presented remarkable performance in many computer vision tasks, such as object detection and face recognition, as well as in the literature of hyperspectral image classification [2, 3]. Many deep models, such as deep belief networks and convolutional neural networks [3, 4], have been applied to hyperspectral image processing tasks. However, due to the limited and unbalanced training samples in the hyperspectral image, the general training process of a deep model for the hyperspectral image usually makes the learned models sub-optimal.
To overcome this problem, metric learning, which tries to maximize the inter-class variance while minimizing the intra-class variance, is usually applied in the training process of the deep model for the hyperspectral image. Generally, metric learning constructs image pairs or triplet data to penalize the inter-class distance and the intra-class distance. By decreasing the variance between samples from the same class and increasing the variance between samples from different classes, the learned features become more discriminative for separating different objects. However, general methods to implement metric learning must construct the image pairs beforehand. Besides, the training process can be unbalanced due to the imbalance of the training samples.
This work develops a novel statistical metric learning (SML) which increases the inter-class variance and decreases the intra-class variance from the statistical view. All the samples from the same class are regarded as being drawn from one distribution. The variance of each class is used to formulate the intra-class variance, while the Euclidean distances between the sample means of different classes are used to measure the inter-class variance. Moreover, the variance between the different sample means is added as a diversity regularization to repulse the classes from each other. The SML is easy to implement. Moreover, under the SML, the variance is measured from the class view, which can balance the training process even with unbalanced training samples.
Following prior work, this work jointly learns the developed SML and the softmax loss for hyperspectral image classification. The softmax loss takes advantage of point-to-point information, while the SML makes use of class-wise information and further improves the representational ability of the learned features. Experimental results over two commonly used hyperspectral images demonstrate the effectiveness of the developed method.
2 Proposed Method
Let us denote $X = \{x_1, x_2, \ldots, x_N\}$ as the set of training samples of the hyperspectral image, where $N$ is the number of training samples, and $y_i \in \{1, 2, \ldots, C\}$ as the label of the sample $x_i$, where $C$ is the number of sample classes.
2.1 General metric learning
Since convolutional neural networks (CNNs) have presented impressive results in hyperspectral image classification, as Fig. 1 shows, this work chooses a CNN model as the feature extractor $f(\cdot)$ for the hyperspectral image. To further improve the representational ability, metric learning is incorporated into the deep learning process. Generally, metric learning calculates a loss that measures the inter-class difference and the intra-class similarity so as to decrease the intra-class variance and increase the inter-class variance simultaneously. Therefore, the loss can be formulated as

$$L_m = L_{inter} + L_{intra},$$

where $L_{inter}$ measures the penalization between different classes and $L_{intra}$ calculates the penalization within each class. Contrastive loss and triplet loss are the most commonly used metric learning methods.
Contrastive loss constructs image pairs (positive pairs and negative pairs), where a positive pair contains images from the same class and a negative pair contains images from different classes. The contrastive loss decreases the distances of the positive pairs and penalizes negative pairs that are closer than a margin. It can be formulated as

$$L_c = \sum_{i,j} \left[ \mathbb{1}(y_i = y_j) \, \| f_i - f_j \|_2^2 + \mathbb{1}(y_i \neq y_j) \max\left(0, m - \| f_i - f_j \|_2\right)^2 \right],$$

where $f_i = f(x_i)$ is the feature of sample $x_i$, $m$ is the margin, and $\mathbb{1}(\cdot)$ represents the indicative function.
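To make the pairwise formulation concrete, a minimal NumPy sketch of the contrastive loss for a single pair is given below; the margin value and toy features are hypothetical, and this is only an illustration rather than the training code used in this work:

```python
import numpy as np

def contrastive_loss(f1, f2, same_class, m=1.0):
    """Contrastive loss for one image pair.

    Positive pairs (same class) are pulled together; negative pairs
    are pushed apart until their distance exceeds the margin m.
    """
    d = np.linalg.norm(f1 - f2)
    if same_class:
        return d ** 2                    # penalize positive-pair distance
    return max(0.0, m - d) ** 2          # penalize close negative pairs

# Hypothetical toy features: one positive and one negative pair.
fa, fb, fc = np.array([1.0, 0.0]), np.array([1.0, 0.1]), np.array([0.0, 0.0])
pos = contrastive_loss(fa, fb, same_class=True)    # small squared distance
neg = contrastive_loss(fa, fc, same_class=False)   # zero: distance >= margin
```

Note that a negative pair already separated by at least the margin contributes nothing, so the gradient concentrates on the pairs that violate the constraint.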
Triplet loss constructs the triplet data $(x_a, x_p, x_n)$, where the positive sample $x_p$ comes from the same class as the anchor $x_a$ and the negative sample $x_n$ is from a different class. The loss is formulated based on the triplet data as

$$L_t = \sum_{(a,p,n)} \max\left(0, \| f_a - f_p \|_2^2 - \| f_a - f_n \|_2^2 + m\right).$$
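Similarly, the triplet loss can be sketched for a single triplet; again the margin and the toy features are hypothetical illustrations, not the actual training code:

```python
import numpy as np

def triplet_loss(fa, fp, fn, m=1.0):
    """Triplet loss for an anchor fa, a positive fp, and a negative fn.

    Pushes the anchor-negative distance to exceed the anchor-positive
    distance by at least the margin m; satisfied triplets cost zero.
    """
    d_pos = np.sum((fa - fp) ** 2)
    d_neg = np.sum((fa - fn) ** 2)
    return max(0.0, d_pos - d_neg + m)

fa = np.array([0.0, 0.0])            # anchor
fp = np.array([0.1, 0.0])            # close positive
easy = triplet_loss(fa, fp, np.array([2.0, 0.0]))   # far negative: zero loss
hard = triplet_loss(fa, fp, np.array([0.5, 0.0]))   # close negative: nonzero
```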
These former metric learning methods usually require data preprocessing to construct the image pairs or triplet data. Besides, these methods consider only the sample-level correlation and ignore the class-level correlation, which would negatively affect the classification performance over unbalanced data, especially for hyperspectral images. Therefore, to overcome these problems, this work develops a novel statistical metric learning.
2.2 Statistical metric learning
Given a training batch $B = \{(x_i, y_i)\}_{i=1}^{n}$, this work regards each class as a distribution and implements the metric learning from the statistical view. Denote $f_i = f(x_i)$ as the feature of $x_i$ extracted from the deep model, and $B_c$ as the samples of the $c$-th class in the batch. Then, $F_c = \{ f_i \,|\, y_i = c \}$ denotes the extracted features of the $c$-th class, where $n_c$ is the number of samples in the class.
The sample mean of the $c$-th class in $B$ is calculated as

$$\mu_c = \frac{1}{n_c} \sum_{f_i \in F_c} f_i.$$

Then, the variance of the samples of the $c$-th class in $B$ can be calculated as

$$\sigma_c^2 = \frac{1}{n_c} \sum_{f_i \in F_c} \| f_i - \mu_c \|_2^2.$$
Since the variance is a measure of how spread out a data set is, this work takes advantage of the variance of the different classes in the batch to formulate the intra-class variance of the training batch. Then, $L_{intra}$ can be formulated as

$$L_{intra} = \frac{1}{C} \sum_{c=1}^{C} \sigma_c^2.$$
Besides, this work tries to enlarge the Euclidean distance between the sample means of different classes to enlarge the inter-class variance:

$$L_{inter} = -\frac{2}{C(C-1)} \sum_{c=1}^{C} \sum_{c'=c+1}^{C} \| \mu_c - \mu_{c'} \|_2^2.$$
Moreover, denote $\mu$ as the center of all the classes in the batch:

$$\mu = \frac{1}{C} \sum_{c=1}^{C} \mu_c.$$
The variance of all the means of the different classes is calculated as a diversity regularization to repulse the classes from each other. It can be formulated as

$$L_{div} = -\frac{1}{C} \sum_{c=1}^{C} \| \mu_c - \mu \|_2^2.$$
The statistical metric learning (SML) penalizes the variance of each class and rewards the Euclidean distances between the sample means of different classes. Besides, the SML uses the variance between the sample means of different classes as a diversity term to repulse the classes from each other. Then, the loss can be formulated as

$$L_{SML} = \lambda_1 L_{intra} + \lambda_2 L_{inter} + \lambda_3 L_{div},$$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are tradeoff parameters.
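The batch-wise SML terms described above can be sketched as follows. This is an illustrative NumPy implementation of the reconstructed formulation, not the Caffe code used in the experiments; the default weights and the toy batch are hypothetical:

```python
import numpy as np

def sml_loss(features, labels, l1=1.0, l2=0.01, l3=0.001):
    """Statistical metric learning loss over one training batch.

    features: (n, d) array of deep features; labels: (n,) class ids.
    Minimizing the loss shrinks each class's variance (intra-class
    term) while enlarging the distances between class means (inter-
    class term) and their spread around the center (diversity term).
    """
    classes = np.unique(labels)
    C = len(classes)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Intra-class term: average per-class variance.
    intra = np.mean([np.mean(np.sum((features[labels == c] - mu) ** 2, axis=1))
                     for c, mu in zip(classes, means)])
    # Inter-class term: negative mean squared distance between class means.
    pair_dists = [np.sum((means[i] - means[j]) ** 2)
                  for i in range(C) for j in range(i + 1, C)]
    inter = -np.mean(pair_dists)
    # Diversity term: negative variance of class means around their center.
    center = means.mean(axis=0)
    div = -np.mean(np.sum((means - center) ** 2, axis=1))
    return l1 * intra + l2 * inter + l3 * div

# Hypothetical toy batch: two tight, well-separated classes.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labs = np.array([0, 0, 1, 1])
loss = sml_loss(feats, labs)   # small intra term, large separation terms
```

For tight, well-separated classes the intra-class term is small and the negative inter-class and diversity terms dominate, so minimizing the loss simultaneously compacts each class and spreads the class means apart.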
The SML calculates the inter-class and intra-class variance over the samples in the training batch and is easy to implement. Moreover, the SML measures the difference from the class view, which can alleviate the unbalanced training caused by unbalanced data. It should also be noted that, as Fig. 1 shows, this work jointly learns the softmax loss and the proposed SML to take advantage of both the point-to-point correlation and the class-wise correlation.
2.3 Implementation of the proposed method
The model is trained by stochastic gradient descent, and back propagation is used in the training process of the proposed method. Generally, the main task in the training process is to compute the gradients of the learning loss w.r.t. the features of the training samples.
The partial derivative of the softmax loss w.r.t. $f_i$ can be computed as in Caffe, which is the deep framework used in this work. For a sample $x_i$ with $y_i = c$, the partial derivative of $L_{intra}$ w.r.t. $f_i$ can be computed by

$$\frac{\partial L_{intra}}{\partial f_i} = \frac{2}{C n_c} (f_i - \mu_c).$$

Besides, the partial derivative of $L_{inter}$ w.r.t. $f_i$ can be calculated as

$$\frac{\partial L_{inter}}{\partial f_i} = -\frac{4}{C(C-1) n_c} \sum_{c' \neq c} (\mu_c - \mu_{c'}).$$

The partial derivative of $L_{div}$ w.r.t. $f_i$ can be calculated as

$$\frac{\partial L_{div}}{\partial f_i} = -\frac{2}{C n_c} (\mu_c - \mu).$$
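As a sanity check on these derivatives, the analytic gradient of the intra-class term, $\frac{\partial L_{intra}}{\partial f_i} = \frac{2}{C n_c}(f_i - \mu_c)$, can be compared against a numerical central difference. This NumPy check is illustrative only, and the toy batch is hypothetical:

```python
import numpy as np

def intra_loss(features, labels):
    """Average per-class feature variance (the intra-class term)."""
    classes = np.unique(labels)
    return np.mean([np.mean(np.sum((features[labels == c]
                                    - features[labels == c].mean(axis=0)) ** 2,
                                   axis=1))
                    for c in classes])

# Hypothetical toy batch: two classes with two samples each.
feats = np.array([[0.0, 0.0], [1.0, 0.5], [4.0, 4.0], [5.0, 3.0]])
labs = np.array([0, 0, 1, 1])
C, n_c = 2, 2

# Analytic gradient w.r.t. f_0 (a sample of class 0).
mu0 = feats[labs == 0].mean(axis=0)
grad_analytic = (2.0 / (C * n_c)) * (feats[0] - mu0)

# Numerical central-difference gradient w.r.t. f_0.
eps = 1e-6
grad_num = np.zeros(2)
for k in range(2):
    fp, fm = feats.copy(), feats.copy()
    fp[0, k] += eps
    fm[0, k] -= eps
    grad_num[k] = (intra_loss(fp, labs) - intra_loss(fm, labs)) / (2 * eps)

match = bool(np.allclose(grad_analytic, grad_num, atol=1e-5))
```

Because the intra-class term is quadratic in each feature, the central difference agrees with the analytic form up to floating-point noise.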
Through back propagation with the former equations, the CNN model can be trained and discriminative features can be learned from the hyperspectral image.
3 Experiments
To further validate the effectiveness of the proposed statistical metric learning, this work conducts experiments over two commonly used hyperspectral images, namely Pavia University and Indian Pines, and compares the developed metric learning with other methods. Pavia University consists of 610 × 340 pixels with 115 bands ranging from 0.43 to 0.86 µm (see Fig. 2 for details). 103 channels are used for the experiments after the noisy bands are removed. 42,776 labelled samples, divided into nine classes, are selected. Indian Pines consists of 145 × 145 pixels with 224 spectral channels ranging from 0.4 to 2.45 µm (see Fig. 3 for details). 24 spectral bands are removed due to noise and the remainder are used for the experiments. A total of 8598 labelled samples from eight classes are chosen from the image. In the experiments, 200 samples of each class are used for training and the remainder for testing.
Caffe is chosen to implement the deep learning framework. In the experiments, the learning rate and the number of training epochs are set to 0.001 and 40000, respectively. The tradeoff parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ are set to 1, 0.01, and 0.001, respectively. The loss weight of the SML in the training process is set to 0.0002. The experimental results in the paper report the mean and standard deviation over ten runs of training and testing.
3.1 Classification Results
The classification results of the proposed method over Pavia University and Indian Pines are shown in tables 1 and 2, respectively. From table 1, we can find that the CNN model trained with the general softmax loss obtains a lower accuracy over the Pavia University dataset than the same model trained with the proposed SML. As table 2 shows, the same trend holds for the Indian Pines dataset. In conclusion, the proposed method can significantly improve the representational ability of the CNN model for hyperspectral image classification.
Besides, Figs. 4 and 5 also show the classification maps of the SVM, the CNN, and the proposed method over Pavia University and Indian Pines, respectively. By comparing Figs. 4 and 5, it can be noted that the deep model significantly improves the performance. Besides, the proposed method further improves the representational ability of the CNN model and discriminates objects with highly overlapping spectra.
3.2 Classification Performance With Different Number of Training Samples
To further validate the performance of the proposed method, Fig. 6 provides the classification accuracies of the CNN model with the general softmax loss and with the proposed SML over Pavia University and Indian Pines, respectively. It can be noted that the proposed SML obtains better performance under different numbers of training samples. Interestingly, the proposed SML provides a larger improvement over the model trained with the general softmax loss when fewer training samples are available. Since the SML considers the correlation from the class view, the number of samples has less effect on its performance, whereas fewer samples significantly degrade the performance of the softmax loss. Therefore, the SML plays a more important role when fewer training samples are available.
3.3 Comparisons with other State-of-the-Art Methods
To comprehensively show the effectiveness of the proposed method, we compare the developed method with several state-of-the-art methods.
Compared with shallow methods, we can find that the proposed method shows better performance than the SIFT-based method. Compared with deep methods, it can be noted from table 3 that the proposed method obtains better performance over Pavia University than D-DBN-PF and the CNN, which are generally used deep models. Besides, from table 4, we can also find that the proposed method achieves better performance over Indian Pines than D-DBN-PF and the CNN. In conclusion, the proposed method provides better performance than both the shallow and deep methods over the hyperspectral images.
4 Conclusions
This work develops a novel statistical metric learning for hyperspectral image classification. The developed SML takes advantage of the sample variance of each class to formulate the intra-class variance. Moreover, the distances between the sample means of the classes are used to penalize the inter-class variance. In addition, the variance of the means of the different classes is added as a diversity regularization to repulse the classes from each other. Experimental results have demonstrated that the proposed method achieves better performance when compared with other state-of-the-art methods.
In future work, we would like to apply the proposed method to other remote sensing datasets. Moreover, other statistics that can measure the spread of a distribution are another direction for improving the performance of general deep learning.
-  P. Zhong, Z. Gong, S. Li, and C. B. Schonlieb, “Learning to diversify deep belief networks for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 6, pp. 3516–3530, 2017.
-  Y. Chen, X. Zhao, and X. Jia, “Spectral-spatial classification of hyperspectral data based on deep belief network,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2381–2392, 2015.
-  Y. Chen, H. Jiang, X. Jia, and P. Ghamisi, “Deep feature extraction and classification of hyperspectral images based on convolutional neural networks,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 10, pp. 6232–6251, 2017.
-  Z. Gong, P. Zhong, Y. Yu, W. Hu, and S. Li, “A CNN with multiscale convolution and diversified metric for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, 2019.
-  Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A discriminative feature learning approach for deep face recognition,” in European Conference on Computer Vision, 2016, pp. 499–515.
-  Y. Li, W. Xie, and H. Li, “Hyperspectral image reconstruction by deep convolutional neural network for classification,” Pattern Recognition, vol. 63, pp. 371–383, 2017.
-  S. S. Haykin, Neural Networks and Learning Machines, New York: Prentice Hall, 2009.
-  Y. Jia and et al., “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 2014, pp. 675–678.
-  “University of Pavia dataset, accessed on May 8, 2019,” http://www.ehu.ews/ccwintoco/index.php/title=Hyperspectral_Remote_Sensing_Scenes.
-  “Indian Pines dataset, accessed on May 8, 2019,” https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html.