Self-supervised debiasing using low rank regularization

10/11/2022
by Geon Yeong Park, et al.

Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability. While most existing debiasing methods require full supervision on either spurious attributes or target labels, training a debiased model from a limited amount of both annotations remains an open problem. To overcome this limitation, we first examine an interesting phenomenon through spectral analysis of latent representations: spuriously correlated, easy-to-learn attributes make neural networks inductively biased toward encoding lower-effective-rank representations. We also show that a rank regularization can amplify this bias in a way that encourages highly correlated features. Motivated by these observations, we propose a self-supervised debiasing framework that is potentially compatible with unlabeled samples. We first pretrain a biased encoder in a self-supervised manner with rank regularization, which serves as a semantic bottleneck that forces the encoder to learn the spuriously correlated attributes. This biased encoder is then used to discover and upweight bias-conflicting samples in a downstream task, acting as a boosting mechanism that effectively debiases the main model. Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines and, in some cases, even outperforms state-of-the-art supervised debiasing approaches.
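The abstract's two ingredients can be sketched in code: an effective-rank measure for latent representations (here the standard exponential of the singular-value entropy; the paper may use a different estimator) and a loss-based rule for upweighting bias-conflicting samples. Note that `conflict_weights` is an illustrative heuristic, not the paper's exact scheme.

```python
import numpy as np

def effective_rank(z, eps=1e-12):
    # z: (n_samples, dim) matrix of latent representations.
    # Effective rank = exp of the Shannon entropy of the normalized
    # singular-value distribution; a low value means the encoder
    # concentrates variance in a few directions.
    s = np.linalg.svd(z, compute_uv=False)
    p = s / (s.sum() + eps)
    return float(np.exp(-(p * np.log(p + eps)).sum()))

def conflict_weights(biased_losses):
    # Samples the *biased* encoder fits poorly (high loss) are likely
    # bias-conflicting; normalize their losses into sampling weights
    # for training the main (debiased) model. Illustrative rule only.
    w = np.asarray(biased_losses, dtype=float)
    return w / w.sum()

# A full-rank feature matrix has effective rank close to its dimension,
# while a rank-one (collapsed) matrix has effective rank close to 1.
```

Under this sketch, a rank regularizer for the biased encoder could simply add a penalty that lowers `effective_rank` of a batch of features during self-supervised pretraining, amplifying the shortcut bias before the upweighting step.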


Related research

03/03/2022 · Understanding Failure Modes of Self-Supervised Learning
Self-supervised learning methods have shown impressive results in downst...

07/06/2020 · Learning from Failure: Training Debiased Classifier from Biased Classifier
Neural networks often learn to make predictions that overly rely on spur...

09/07/2023 · Adapting Self-Supervised Representations to Multi-Domain Setups
Current state-of-the-art self-supervised approaches are effective when ...

06/01/2022 · Self-supervised Learning for Label Sparsity in Computational Drug Repositioning
Computational drug repositioning aims to discover new uses for marke...

10/28/2022 · Elastic Weight Consolidation Improves the Robustness of Self-Supervised Learning Methods under Transfer
Self-supervised representation learning (SSL) methods provide an effecti...

10/11/2022 · Efficient debiasing with contrastive weight pruning
Neural networks are often biased to spuriously correlated features that ...

02/21/2022 · Toward more generalized Malicious URL Detection Models
This paper reveals a data bias issue that can severely affect the perfor...
