A Simple Long-Tailed Recognition Baseline via Vision-Language Model

11/29/2021
by Teli Ma, et al.

The visual world naturally exhibits a long-tailed distribution of open classes, which poses great challenges to modern visual systems. Existing approaches either apply class re-balancing strategies or directly improve network modules to address the problem. However, they still train models with a finite set of predefined labels, limiting their supervision information and restricting their transferability to novel instances. Recent advances in large-scale contrastive vision-language pretraining shed light on a new pathway for visual recognition. With open-vocabulary supervision, pretrained contrastive vision-language models learn powerful multimodal representations that are promising for handling data deficiency and unseen concepts. By computing the semantic similarity between visual and text inputs, visual recognition is converted into a vision-language matching problem. Inspired by this, we propose BALLAD to leverage contrastive vision-language models for long-tailed recognition. We first continue pretraining the vision-language backbone through contrastive learning on a specific long-tailed target dataset. Afterward, we freeze the backbone and train an additional adapter layer to enhance the representations of tail classes on balanced training samples built with re-sampling strategies. Extensive experiments have been conducted on three popular long-tailed recognition benchmarks. As a result, our simple and effective approach sets a new state of the art and outperforms competitive baselines by a large margin. Code is released at https://github.com/gaopengcuhk/BALLAD.
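The recognition-as-matching step and the frozen-backbone adapter phase can be sketched roughly as follows, assuming OpenAI's CLIP package (https://github.com/openai/CLIP) as the pretrained vision-language backbone. The class names, prompt template, adapter architecture, and mixing ratio below are illustrative placeholders rather than the paper's exact configuration.

```python
# Minimal sketch, assuming OpenAI's CLIP as the vision-language backbone.
# Class names, prompt, adapter shape, and alpha are illustrative placeholders.
import torch
import torch.nn as nn
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone, preprocess = clip.load("ViT-B/16", device=device)
backbone.eval()  # the backbone stays frozen in the adapter phase

# Text side: encode one prompt per class once and reuse the embeddings.
class_names = ["goldfish", "snow leopard", "abacus"]  # placeholder label set
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
with torch.no_grad():
    text_feat = backbone.encode_text(prompts).float()
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

class Adapter(nn.Module):
    """Small residual MLP on top of the frozen image encoder.

    Only this module receives gradients; it would be trained on class-balanced
    batches built with a re-sampling strategy, as the abstract describes.
    """

    def __init__(self, dim: int, hidden: int = 256, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha  # residual mixing ratio (illustrative value)
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * self.mlp(x) + (1 - self.alpha) * x

adapter = Adapter(text_feat.shape[-1]).to(device)

def classify(images: torch.Tensor) -> torch.Tensor:
    """Recognition as matching: score images against class prompts by cosine similarity."""
    with torch.no_grad():
        img_feat = backbone.encode_image(images).float()
    img_feat = adapter(img_feat)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    logits = 100.0 * img_feat @ text_feat.t()  # temperature-scaled similarities
    return logits.argmax(dim=-1)               # predicted class indices
```

In the first phase described by the abstract, the same backbone would instead be finetuned with the image-text contrastive objective on the long-tailed target dataset before being frozen for the adapter phase.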


Related research

- Decoupling Representation and Classifier for Long-Tailed Recognition (10/21/2019): The long-tail distribution of the visual world poses great challenges fo...
- Improving Tail-Class Representation with Centroid Contrastive Learning (10/19/2021): In vision domain, large-scale natural datasets typically exhibit long-ta...
- Self Supervision to Distillation for Long-Tailed Visual Recognition (09/09/2021): Deep learning has achieved remarkable progress for visual recognition on...
- VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition (11/26/2021): Deep learning-based models encounter challenges when processing long-tai...
- Parametric Contrastive Learning (07/26/2021): In this paper, we propose Parametric Contrastive Learning (PaCo) to tack...
- The Devil is in the Tails: How Long-Tailed Code Distributions Impact Large Language Models (09/07/2023): Learning-based techniques, especially advanced Large Language Models (LL...
- Democratizing Contrastive Language-Image Pre-training: A CLIP Benchmark of Data, Model, and Supervision (03/11/2022): Contrastive Language-Image Pretraining (CLIP) has emerged as a novel par...
