PCLNet: A Practical Way for Unsupervised Deep PolSAR Representations and Few-Shot Classification

06/27/2020
by Lamei Zhang et al.

Deep learning and convolutional neural networks (CNNs) have made progress in polarimetric synthetic aperture radar (PolSAR) image classification over the past few years. However, a crucial issue remains unaddressed: CNNs require abundant labeled samples, while human annotations of PolSAR images are scarce. It is well known that the supervised learning paradigm is prone to overfitting the training data, and the lack of supervision information for PolSAR images further aggravates this problem, severely limiting the generalization performance of CNN-based classifiers in large-scale applications. To handle this problem, this paper explores, for the first time, learning transferable representations from unlabeled PolSAR data with convolutional architectures. Specifically, a PolSAR-tailored contrastive learning network (PCLNet) is proposed for unsupervised deep PolSAR representation learning and few-shot classification. Instead of directly reusing processing methods designed for optical imagery, a diversity stimulation mechanism is constructed to narrow the application gap between optics and PolSAR. Going beyond conventional supervised methods, PCLNet adds an auxiliary pre-training phase based on the proxy objective of contrastive instance discrimination to learn useful representations from unlabeled PolSAR data. The acquired representations are then transferred to the downstream task, i.e., few-shot PolSAR classification. Experiments on two widely used PolSAR benchmark datasets confirm the validity of PCLNet. Moreover, this work may shed light on how to efficiently exploit the massive amounts of unlabeled PolSAR data to alleviate the heavy demand of CNN-based methods for human annotations.
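The abstract describes the pre-training objective only at a high level. As a rough illustration, the snippet below is a minimal sketch of a contrastive instance-discrimination (InfoNCE-style) loss in PyTorch, assuming two augmented views of each unlabeled sample are embedded by a shared encoder. The function name info_nce_loss, the temperature value, and the encoder/augmentation details are illustrative assumptions, not PCLNet's actual implementation (which also includes the diversity stimulation mechanism, not shown here).

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: (N, D) projections of two augmented views of the same N samples.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, D) stacked views
    sim = z @ z.t() / temperature                  # (2N, 2N) cosine-similarity logits
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-similarity
    # The positive for sample i is its other view at index i + N (and vice versa);
    # all remaining instances in the batch serve as negatives.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: given an encoder f and two stochastic views x1, x2 of a batch,
#   loss = info_nce_loss(f(x1), f(x2)); loss.backward(); optimizer.step()
# After pre-training, the encoder would be fine-tuned on the few labeled samples
# of the downstream few-shot PolSAR classification task.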
