Interventional Contrastive Learning with Meta Semantic Regularizer

06/29/2022, by Wenwen Qiang, et al.

Contrastive learning (CL)-based self-supervised learning models learn visual representations in a pairwise manner. Although prevailing CL models have achieved great progress, in this paper we uncover a previously overlooked phenomenon: when a CL model is trained on full images, it performs better when tested on full images than on foreground areas; when it is trained on foreground areas, it performs worse on full images than on foreground areas. This observation reveals that image backgrounds can interfere with the model's learning of semantic information, and that their influence has not been fully eliminated. To tackle this issue, we build a Structural Causal Model (SCM) that treats the background as a confounder. We propose a backdoor adjustment-based regularization method, namely Interventional Contrastive Learning with Meta Semantic Regularizer (ICL-MSR), to perform causal intervention on the proposed SCM. ICL-MSR can be incorporated into any existing CL method to alleviate background distraction during representation learning. Theoretically, we prove that ICL-MSR achieves a tighter error bound. Empirically, experiments on multiple benchmark datasets demonstrate that ICL-MSR improves the performance of several state-of-the-art CL methods.
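To make the idea of "contrastive loss plus a background-suppressing regularizer" concrete, here is a minimal, illustrative sketch. It is not the paper's actual ICL-MSR formulation: the prototype set, the regularizer form, and all names (`background_prototypes`, `msr_weight`, `semantic_regularizer`) are assumptions made for illustration, with the backdoor adjustment crudely approximated by averaging similarity scores over a fixed set of background "confounder" prototypes.

```python
# Illustrative sketch only; names and the regularizer form are assumptions,
# not the authors' ICL-MSR method.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss over two batches of paired embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def semantic_regularizer(z, background_prototypes, temperature=0.1):
    """Penalize embeddings that align strongly with any background prototype.
    Averaging over the prototype set is a rough stand-in for summing over
    confounder strata in a backdoor adjustment."""
    z = F.normalize(z, dim=1)
    protos = F.normalize(background_prototypes, dim=1)
    sim = z @ protos.t() / temperature               # (B, K) similarities
    return sim.softmax(dim=1).max(dim=1).values.mean()

def icl_msr_style_loss(z1, z2, background_prototypes, msr_weight=0.1):
    """Contrastive loss plus the hypothetical background regularizer."""
    z_all = torch.cat([z1, z2], dim=0)
    return info_nce(z1, z2) + msr_weight * semantic_regularizer(z_all, background_prototypes)
```

In this sketch, the regularizer simply discourages representations from collapsing onto background-like directions; the actual paper derives its regularizer from the SCM via backdoor adjustment, so consult the full text for the precise objective.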


