FALSE: False Negative Samples Aware Contrastive Learning for Semantic Segmentation of High-Resolution Remote Sensing Image

11/15/2022
by Zhaoyang Zhang, et al.

Existing self-supervised contrastive learning (SSCL) of remote sensing images (RSIs) is built on constructing positive and negative sample pairs. However, because of the richness of RSI ground objects and the complexity of RSI contextual semantics, positive and negative samples coexist, in imbalanced proportions, within the same RSI patches. As a result, when SSCL pushes negative samples apart, it also pushes away samples that are in fact positive, and vice versa. We call this the sample confounding issue (SCI). To solve this problem, we propose a False negAtive sampLes aware contraStive lEarning model (FALSE) for the semantic segmentation of high-resolution RSIs. Because SSCL pretraining is unsupervised, there is no definable criterion for identifying a false negative sample (FNS), which makes exact FNS identification theoretically undecidable. We therefore approximate FNS determination in two steps: coarse determination of FNS, implemented by an FNS self-determination (FNSD) strategy, and precise calibration of FNS, implemented by an FNS confidence calibration (FNCC) loss function. Experimental results on three RSI semantic segmentation datasets show that FALSE improves the accuracy of the downstream RSI semantic segmentation task relative to three baselines representing three different types of SSCL models: the mean Intersection-over-Union improves by 0.7% on average on the ISPRS Potsdam dataset, by 12.28% on average on the CVPR DGLC dataset, and by 1.17% on average on the Xiangtan dataset. These results indicate that an SSCL model can self-differentiate FNS and that FALSE effectively mitigates the SCI in self-supervised contrastive learning. The source code is available at https://github.com/GeoX-Lab/FALSE.
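To make the two-step idea concrete, below is a minimal PyTorch sketch of a false-negative-aware InfoNCE-style loss. This is an illustration, not the paper's implementation: the function name, the cosine-similarity threshold used as a stand-in for the FNSD coarse determination, and the fixed down-weight used as a stand-in for the FNCC calibration are all assumptions; the actual FNSD strategy and FNCC loss are defined in the full text.

```python
import torch
import torch.nn.functional as F

def fns_aware_info_nce(anchor, positive, negatives, tau=0.5, fns_threshold=0.8):
    """Sketch of a false-negative-aware contrastive loss.

    anchor:    (D,)   embedding of the anchor view
    positive:  (D,)   embedding of the positive view
    negatives: (N, D) embeddings of candidate negative samples

    `fns_threshold` and the 0.1 down-weight below are hypothetical
    placeholders for the paper's FNSD and FNCC components.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = torch.dot(anchor, positive) / tau   # scalar
    neg_sim = negatives @ anchor / tau            # (N,)

    # Coarse determination (FNSD-like step): candidate negatives that are
    # highly similar to the positive view are flagged as likely false
    # negatives, i.e. patches that probably share the anchor's semantics.
    fns_mask = (negatives @ positive) > fns_threshold  # (N,) bool

    # Confidence calibration (FNCC-like step): rather than hard removal,
    # suspected false negatives are down-weighted in the denominator so
    # they are not pushed away as strongly as confident negatives.
    weights = torch.where(fns_mask,
                          torch.full_like(neg_sim, 0.1),
                          torch.ones_like(neg_sim))

    denom = torch.exp(pos_sim) + (weights * torch.exp(neg_sim)).sum()
    return -(pos_sim - torch.log(denom))

# Example usage with random embeddings (D=128, N=16 candidate negatives):
loss = fns_aware_info_nce(torch.randn(128), torch.randn(128), torch.randn(16, 128))
```

Soft down-weighting, rather than hard removal, reflects the undecidability noted above: since flagged samples are only suspected false negatives, reducing their contribution hedges against mislabeling true negatives.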
