Making Pre-trained Language Models Good Long-tailed Learners

05/11/2022
by Chen Zhang, et al.

Prompt-tuning has shown appealing performance in few-shot classification by virtue of its ability to effectively exploit pre-trained knowledge. This motivates us to test the hypothesis that prompt-tuning is also a promising choice for long-tailed classification, since the tail classes are intuitively few-shot ones. We conduct empirical studies to examine this hypothesis, and the results demonstrate that prompt-tuning indeed makes pre-trained language models at least good long-tailed learners. To gain intuition on why prompt-tuning achieves good performance in long-tailed classification, we carry out an in-depth analysis that progressively bridges the gap between prompt-tuning and the commonly used fine-tuning. In summary, the classifier structure and parameterization form the key to making good long-tailed learners, whereas the input structure matters less. Finally, we verify the applicability of our findings to few-shot classification.
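Concretely, prompt-tuning casts classification as a cloze task for a masked language model (the classifier is the pre-trained vocabulary head restricted to label words), whereas standard fine-tuning attaches a randomly initialized linear head. The sketch below illustrates the prompt-tuning side only; the backbone, template, and verbalizer are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the authors' code): prompt-tuning as cloze-style
# classification with a masked language model. Backbone, template, and
# verbalizer below are assumptions made for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Verbalizer: each class is represented by a single label word in the vocabulary.
verbalizer = {"positive": "great", "negative": "terrible"}  # illustrative classes
label_word_ids = [tokenizer.convert_tokens_to_ids(w) for w in verbalizer.values()]

def prompt_scores(text: str) -> torch.Tensor:
    """Score classes via the masked-LM logits of the label words at the [MASK] slot."""
    template = f"{text} It was {tokenizer.mask_token}."  # assumed cloze template
    inputs = tokenizer(template, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    return logits[0, mask_pos, label_word_ids]  # one score per class

print(dict(zip(verbalizer, prompt_scores("The movie was wonderful.").tolist())))
```

By contrast, fine-tuning would place a freshly initialized linear layer over the sentence representation, so the classifier parameters carry no pre-trained knowledge; this difference in classifier structure and parameterization is what the abstract identifies as the key factor for long-tailed performance.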
