ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot

08/05/2021
by   Jiarui Cai, et al.

One-stage long-tailed recognition methods improve the overall performance in a "seesaw" manner, i.e., they either sacrifice the head's accuracy for better tail classification or raise the head's accuracy even higher while ignoring the tail. Existing algorithms bypass this trade-off with a multi-stage training process: pre-training on the imbalanced set and fine-tuning on a balanced set. Though they achieve promising performance, these methods are not only sensitive to the generalizability of the pre-trained model but also hard to integrate into other computer vision tasks such as detection and segmentation, where pre-training classifiers alone is not applicable. In this paper, we propose a one-stage long-tailed recognition scheme, Ally Complementary Experts (ACE), where each expert is the most knowledgeable specialist in the sub-set that dominates its training, and is complementary to the other experts in the less-seen categories without being disturbed by what it has never seen. We design a distribution-adaptive optimizer that adjusts the learning pace of each expert to avoid over-fitting. Without special bells and whistles, the vanilla ACE outperforms the current one-stage SOTA method by 3-10% on the CIFAR100-LT, ImageNet-LT, and iNaturalist datasets. It is also the first method shown to break the "seesaw" trade-off by improving the accuracy of the majority and minority categories simultaneously in only one stage. Code and trained models are available at https://github.com/jrcai/ACE.
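The abstract only sketches the two key ideas: class-subset experts that never see classes outside their subset, and per-expert learning rates adapted to the data distribution. Below is a minimal PyTorch-style sketch of how such a design could look. It is not the authors' implementation (see the linked repository for that); the class groupings, sample counts, learning-rate scaling rule, and all names here are illustrative assumptions.

```python
# Minimal sketch of the ACE idea, NOT the authors' implementation
# (see https://github.com/jrcai/ACE for that). Class groupings,
# sample counts, and the averaging rule below are assumptions.
import torch
import torch.nn as nn

class ACESketch(nn.Module):
    """Several classifier 'experts', each responsible for a nested
    subset of categories (e.g., all / medium+tail / tail-only)."""

    def __init__(self, feat_dim, num_classes, expert_class_masks):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in expert_class_masks
        )
        # Bool masks of shape (num_experts, num_classes): which classes
        # each expert is trained on.
        self.register_buffer("masks", torch.stack(expert_class_masks))

    def forward(self, feats):
        # Each expert scores only its own subset, so it is never
        # "disturbed by what it has never seen".
        logits = [
            head(feats).masked_fill(~mask, float("-inf"))
            for head, mask in zip(self.experts, self.masks)
        ]
        stacked = torch.stack(logits)            # (E, B, C)
        valid = self.masks.float().unsqueeze(1)  # (E, 1, C)
        # Average the complementary experts over the classes they cover
        # (a simple stand-in for the paper's aggregation).
        finite = torch.where(
            torch.isfinite(stacked), stacked, torch.zeros_like(stacked)
        )
        return (finite * valid).sum(0) / valid.sum(0).clamp(min=1.0)

# Hypothetical "distribution-adaptive" learning rates: scale each
# expert's step size by how much data its subset covers, so experts
# that see fewer samples take smaller steps and over-fit less.
num_classes = 100
masks = [
    torch.tensor([True] * num_classes),        # expert 1: all classes
    torch.tensor([False] * 50 + [True] * 50),  # expert 2: medium + tail
    torch.tensor([False] * 80 + [True] * 20),  # expert 3: tail only
]
model = ACESketch(feat_dim=64, num_classes=num_classes,
                  expert_class_masks=masks)

samples_per_expert = [50_000, 12_000, 3_000]   # made-up counts
base_lr = 0.1
param_groups = [
    {"params": head.parameters(),
     "lr": base_lr * n / max(samples_per_expert)}
    for head, n in zip(model.experts, samples_per_expert)
]
optimizer = torch.optim.SGD(param_groups, momentum=0.9)
```

The nested subsets mean every expert sees the tail, but only the specialists dominate it, which is one plausible way to read "complementary" in the abstract; consult the paper and repository for the actual grouping and aggregation.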


