Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling

11/01/2021
by Ziyu Jiang, et al.

Contrastive learning approaches have achieved great success in learning visual representations with few labels of the target classes. That implies a tantalizing possibility of scaling them up beyond a curated "seed" benchmark, incorporating more unlabeled images from internet-scale external sources to enhance their performance. In practice, however, larger amounts of unlabeled data demand more computing resources, owing to the bigger model size and longer training needed. Moreover, open-world unlabeled data usually follows an implicit long-tail class or attribute distribution, and many samples do not belong to the target classes at all. Blindly leveraging all unlabeled data can therefore lead to data imbalance as well as distraction issues. This motivates us to seek a principled approach for strategically selecting unlabeled data from an external source, in order to learn generalizable, balanced, and diverse representations for the relevant classes. In this work, we present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK), which follows three simple principles: (1) tailness, which encourages sampling of examples from tail classes by sorting the empirical contrastive loss expectation (ECLE) of samples over random data augmentations; (2) proximity, which rejects out-of-distribution outliers that may distract training; and (3) diversity, which ensures diversity in the set of sampled examples. Empirically, using ImageNet-100-LT (without labels) as the seed dataset and two "noisy" external data sources, we demonstrate that MAK consistently improves both the overall representation quality and the class balancedness of the learned features, as evaluated via linear classifier evaluation in full-shot and few-shot settings. The code is available at: <https://github.com/VITA-Group/MAK>
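The abstract only describes MAK at the level of its three principles, so the following is a minimal Python sketch of how they could be combined, assuming embeddings of the external and seed images and per-sample ECLE scores have already been computed with the pretrained contrastive model. The function name `mak_select`, the quantile-based proximity threshold, and the 5×k candidate pool are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np


def mak_select(ext_feats, ext_ecle, seed_feats, k, proximity_quantile=0.9):
    """Hedged sketch of Model-Aware K-center (MAK) style sampling.

    ext_feats : (N, d) L2-normalized embeddings of external unlabeled images
    ext_ecle  : (N,) empirical contrastive loss expectation per external image,
                estimated by averaging the contrastive loss over random augmentations
    seed_feats: (M, d) embeddings of the curated seed dataset
    k         : number of external samples to select
    Returns indices (into ext_feats) of the selected external samples.
    """
    # Proximity: reject out-of-distribution outliers that sit far from the seed set.
    # "Far" here is the distance to the nearest seed embedding; the quantile
    # threshold is an illustrative choice.
    dists_to_seed = np.min(
        np.linalg.norm(ext_feats[:, None, :] - seed_feats[None, :, :], axis=-1),
        axis=1,
    )
    keep = dists_to_seed <= np.quantile(dists_to_seed, proximity_quantile)
    cand_idx = np.where(keep)[0]

    # Tailness: prefer candidates with high ECLE (hard, tail-like examples),
    # keeping a pool larger than k so the diversity step still has room to act.
    pool_size = min(len(cand_idx), 5 * k)
    pool = cand_idx[np.argsort(-ext_ecle[cand_idx])[:pool_size]]

    # Diversity: greedy k-center (farthest-point) selection over the pool.
    selected = [pool[0]]
    min_dist = np.linalg.norm(ext_feats[pool] - ext_feats[selected[0]], axis=1)
    while len(selected) < min(k, len(pool)):
        nxt = pool[np.argmax(min_dist)]
        selected.append(nxt)
        min_dist = np.minimum(
            min_dist, np.linalg.norm(ext_feats[pool] - ext_feats[nxt], axis=1)
        )
    return np.array(selected)


if __name__ == "__main__":
    # Toy usage with random unit-norm features standing in for model embeddings.
    rng = np.random.default_rng(0)
    ext = rng.normal(size=(1000, 128))
    ext /= np.linalg.norm(ext, axis=1, keepdims=True)
    seed = rng.normal(size=(200, 128))
    seed /= np.linalg.norm(seed, axis=1, keepdims=True)
    chosen = mak_select(ext, rng.random(1000), seed, k=50)
    print(chosen.shape)  # (50,)
```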

