Adapting Self-Supervised Representations to Multi-Domain Setups

09/07/2023
by Neha Kalibhat, et al.

Current state-of-the-art self-supervised approaches are effective when trained on individual domains but show limited generalization to unseen domains. We observe that these models generalize poorly even when trained on a mixture of domains, making them unsuitable for deployment in diverse real-world setups. We therefore propose a general-purpose, lightweight Domain Disentanglement Module (DDM) that can be plugged into any self-supervised encoder to effectively perform representation learning on multiple, diverse domains with or without shared classes. During pre-training with a self-supervised loss, DDM enforces disentanglement in the representation space by splitting it into a domain-variant and a domain-invariant portion. When domain labels are not available, DDM uses a robust clustering approach to discover pseudo-domains. We show that pre-training with DDM yields up to a 3.5% improvement in linear probing accuracy for state-of-the-art self-supervised models, including SimCLR, MoCo, BYOL, DINO, SimSiam, and Barlow Twins, on multi-domain benchmarks including PACS, DomainNet, and WILDS. Models trained with DDM also show significantly improved generalization (7.4%) compared to baselines. DDM can therefore efficiently adapt self-supervised encoders to provide high-quality, generalizable representations for diverse multi-domain data.
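The abstract only sketches DDM at a high level: a plug-in module that splits an encoder's representation into a domain-variant and a domain-invariant part, with pseudo-domains discovered by clustering when labels are missing. The following PyTorch sketch illustrates that idea under stated assumptions; the class name DDMSketch, the half-and-half split, the linear domain head, and the use of plain k-means are illustrative stand-ins, not the paper's actual implementation.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class DDMSketch(nn.Module):
    # Wraps any self-supervised encoder and splits its output embedding into
    # a domain-variant half and a domain-invariant half (a simple 50/50 split
    # is assumed here; the paper may partition the space differently).
    def __init__(self, encoder: nn.Module, feat_dim: int, n_domains: int):
        super().__init__()
        self.encoder = encoder
        self.split = feat_dim // 2
        # Hypothetical auxiliary head: predicting (pseudo-)domain labels from
        # the variant half pushes domain information into that half, leaving
        # the other half domain-invariant.
        self.domain_head = nn.Linear(self.split, n_domains)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)  # (batch, feat_dim)
        z_var, z_inv = z[:, :self.split], z[:, self.split:]
        return z_var, z_inv, self.domain_head(z_var)


def pseudo_domains(features: torch.Tensor, k: int) -> torch.Tensor:
    # When domain labels are unavailable, assign pseudo-domain labels by
    # clustering encoder features. Plain k-means stands in for the "robust
    # clustering approach" mentioned in the abstract.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        features.detach().cpu().numpy()
    )
    return torch.as_tensor(labels, dtype=torch.long)

In training, one plausible arrangement is to apply the usual self-supervised objective (e.g., SimCLR's contrastive loss) to the representation while adding a cross-entropy term from domain_head on the variant half, using the pseudo_domains output as targets when true domain labels are absent; the abstract does not specify the paper's exact loss composition.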
