Meta Feature Modulator for Long-tailed Recognition

08/08/2020
by Renzhen Wang, et al.

Deep neural networks often degrade significantly when the training data suffer from class imbalance. Existing approaches, e.g., re-sampling and re-weighting, commonly address this issue by rearranging the label distribution of the training data so that the trained networks fit an implicitly balanced label distribution. However, most of them limit the representational power of the learned features because they make insufficient use of intra- and inter-sample information in the training data. To address this issue, we propose the meta feature modulator (MFM), a meta-learning framework that models the difference between long-tailed training data and balanced meta data from the perspective of representation learning. Concretely, we employ learnable hyper-parameters (dubbed modulation parameters) to adaptively scale and shift the intermediate features of classification networks; these modulation parameters are optimized together with the classification network parameters, guided by a small amount of balanced meta data. We further design a modulator network to guide the generation of the modulation parameters, and such a meta-learner can be readily adapted to train classification networks on other long-tailed datasets. Extensive experiments on benchmark vision datasets substantiate the superiority of our approach on long-tailed recognition tasks over other state-of-the-art methods.
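The core operation the abstract describes — learnable per-channel scale and shift parameters applied to intermediate features — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function name `modulate` and the toy values are invented for the example.

```python
import numpy as np

def modulate(features, gamma, beta):
    """Scale and shift intermediate features channel-wise.

    features: (batch, channels) activations from a classification network.
    gamma, beta: per-channel modulation parameters (the learnable
    hyper-parameters of the abstract), broadcast over the batch dimension.
    """
    return gamma * features + beta

# With identity parameters (gamma=1, beta=0) the classifier sees the
# features unchanged -- a natural initialization.
feats = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
assert np.allclose(modulate(feats, np.ones(2), np.zeros(2)), feats)

# After meta-updates on balanced meta data, gamma/beta would re-scale and
# re-center feature channels; illustrative (made-up) values:
gamma = np.array([1.5, 0.5])
beta = np.array([0.0, 1.0])
print(modulate(feats, gamma, beta))  # -> [[1.5, 2.0], [4.5, 3.0]]
```

In the paper's bilevel scheme, gamma and beta would not be set by hand as above but updated by gradients of a loss computed on the balanced meta data, while the network weights are updated on the long-tailed training data.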


