
Modified Diversity of Class Probability Estimation Co-training for Hyperspectral Image Classification

by   Yan Ju, et al.

Due to the limited amount and class imbalance of labeled training data, conventional supervised learning cannot ensure that the features learned for hyperspectral image (HSI) classification are discriminative. In this paper, we propose a modified diversity of class probability estimation (MDCPE) method with two deep neural networks to learn spectral-spatial features for HSI classification. In the co-training phase, a recurrent neural network (RNN) and a convolutional neural network (CNN) serve as the two learners that extract features from labeled and unlabeled data. Based on the extracted features, MDCPE selects the most credible samples to update the initial labeled data by combining k-means clustering with traditional diversity of class probability estimation (DCPE) co-training. In this way, MDCPE keeps the newly labeled data class-balanced and extracts discriminative features for both minority and majority classes. During testing, classification results are obtained by the co-decision of the two learners. Experimental results demonstrate that the proposed semi-supervised co-training method makes full use of unlabeled information to enhance the generalization of the learners, and that it achieves favorable accuracies on three widely used data sets: Salinas, Pavia University and Pavia Center.
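The sample-selection step described above can be sketched in a few lines of NumPy. This is a hedged simplification, not the paper's exact algorithm: it keeps unlabeled samples on which the two learners agree on the predicted class but whose class probability estimates diverge the most (the DCPE-style diversity criterion), and it enforces class balance with a simple per-class cap, standing in for the k-means clustering the paper combines with DCPE. The function name, the L2 diversity score, and the `per_class_cap` parameter are all illustrative assumptions.

```python
import numpy as np

def select_credible_samples(probs_a, probs_b, per_class_cap):
    """Sketch of DCPE-style co-training selection with class balancing.

    probs_a, probs_b: (n_samples, n_classes) class probability estimates
    from the two learners (e.g. the RNN and the CNN) on unlabeled data.
    Returns the selected sample indices and their pseudo-labels.
    """
    pred_a = probs_a.argmax(axis=1)
    pred_b = probs_b.argmax(axis=1)
    agree = pred_a == pred_b  # only pseudo-label samples the learners agree on

    # Diversity score: L2 distance between the two probability estimates.
    # High diversity with agreeing labels marks samples that are informative
    # for co-training updates.
    diversity = np.linalg.norm(probs_a - probs_b, axis=1)

    selected = []
    for c in np.unique(pred_a[agree]):
        idx = np.where(agree & (pred_a == c))[0]
        # Most diverse samples first, capped per class to keep the newly
        # labeled set class-balanced (simplified stand-in for k-means).
        order = idx[np.argsort(-diversity[idx])]
        selected.extend(order[:per_class_cap].tolist())

    selected = np.array(sorted(selected), dtype=int)
    return selected, pred_a[selected]
```

In an actual co-training loop, the selected indices and pseudo-labels would be moved from the unlabeled pool into each learner's training set before the next retraining round.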



