Self-supervised Learning is More Robust to Dataset Imbalance

10/11/2021
by Hong Liu, et al.

Self-supervised learning (SSL) is a scalable way to learn general visual representations since it learns without labels. However, large-scale unlabeled datasets in the wild often have long-tailed label distributions, and little is known about the behavior of SSL in this regime. In this work, we systematically investigate self-supervised learning under dataset imbalance. First, we find through extensive experiments that off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations. The performance gap between balanced and imbalanced pre-training with SSL is significantly smaller than the gap with supervised learning, across sample sizes, for both in-domain and, especially, out-of-domain evaluation. Second, towards understanding the robustness of SSL, we hypothesize that SSL learns richer features from frequent data: it may learn label-irrelevant-but-transferable features that help classify the rare classes and downstream tasks. In contrast, supervised learning has no incentive to learn features irrelevant to the labels from frequent examples. We validate this hypothesis with semi-synthetic experiments and theoretical analyses in a simplified setting. Third, inspired by the theoretical insights, we devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets under several evaluation criteria, closing the small gap between balanced and imbalanced datasets with the same number of examples.
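The abstract does not spell out the re-weighting scheme, but the general idea of re-weighted regularization can be sketched as follows: estimate how "rare" each example is in feature space (here via a k-nearest-neighbour distance as a crude density proxy; the function names and the choice of proxy are illustrative assumptions, not the paper's exact method), then scale a per-example regularization term by that rarity when forming the training loss.

```python
import numpy as np

def inverse_density_weights(features, k=5):
    """Assign each example a weight inversely related to its estimated
    local density: the distance to its k-th nearest neighbour serves as
    a density proxy, so rare examples receive larger weights.
    (Illustrative proxy; not necessarily the paper's estimator.)"""
    # pairwise Euclidean distances between all feature vectors
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    # sorted row: index 0 is the self-distance (0), so index k is the
    # distance to the k-th nearest neighbour
    knn_dist = np.sort(d, axis=1)[:, k]
    # normalise so the weights average to 1
    return knn_dist / knn_dist.mean()

def reweighted_loss(ssl_losses, reg_terms, weights, lam=0.1):
    """Per-example SSL loss plus a regulariser scaled by the density
    weights, so rare examples are regularised more strongly."""
    return np.mean(ssl_losses + lam * weights * reg_terms)
```

A rare (low-density) example gets a large weight, so its regularization term dominates; frequent examples near dense cluster centers are regularized less. `lam` trades off the SSL objective against the re-weighted regularizer.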
