Beyond prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations

10/29/2022
by Yu Fei, et al.

Recent work has demonstrated that pre-trained language models (PLMs) are zero-shot learners. However, most existing zero-shot methods involve heavy human engineering or complicated self-training pipelines, hindering their application to new situations. In this work, we show that zero-shot text classification can be improved simply by clustering texts in the embedding spaces of PLMs. Specifically, we fit the unlabeled texts with a Bayesian Gaussian Mixture Model after initializing cluster positions and shapes using class names. Despite its simplicity, this approach achieves superior or comparable performance on both topic and sentiment classification datasets and outperforms prior works significantly on unbalanced datasets. We further explore the applicability of our clustering approach by evaluating it on 14 datasets with more diverse topics, text lengths, and numbers of classes. Our approach achieves an average of 20% absolute improvement over prompt-based zero-shot learning. Finally, we compare different PLM embedding spaces and find that texts are well-clustered by topics even if the PLM is not explicitly pre-trained to generate meaningful sentence embeddings. This work indicates that PLM embeddings can categorize texts without task-specific fine-tuning, thus providing a new way to analyze and utilize their knowledge and zero-shot learning ability.
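
To make the method concrete, below is a minimal sketch of the clustering idea, not the authors' implementation: texts and class names are embedded with a sentence encoder, each mixture component is initialized at a class-name embedding, and EM then refines the cluster positions and shapes on the unlabeled texts. The encoder name, example texts, and class names are illustrative assumptions, and the sketch substitutes scikit-learn's standard GaussianMixture (which accepts means_init) for the paper's Bayesian GMM, since scikit-learn's BayesianGaussianMixture does not expose direct mean initialization.

    # Minimal sketch, not the authors' code. Assumes sentence-transformers
    # and scikit-learn are installed; the encoder, example texts, and class
    # names below are illustrative placeholders.
    from sentence_transformers import SentenceTransformer
    from sklearn.mixture import GaussianMixture

    texts = [
        "The striker scored twice in the final minutes.",
        "Shares dropped after the quarterly earnings call.",
        "The chipmaker unveiled a faster mobile processor.",
        "The goalkeeper saved a penalty in extra time.",
        "The central bank hinted at another rate hike.",
        "Researchers released an open-source language model.",
    ]
    class_names = ["sports", "business", "technology"]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    X = encoder.encode(texts)              # (n_texts, dim) text embeddings
    anchors = encoder.encode(class_names)  # one embedding per class name

    # Initialize each mixture component at its class-name embedding, then
    # let EM refine the cluster positions and shapes on the unlabeled texts.
    # (The paper fits a Bayesian GMM; this sketch uses a standard GMM
    # because BayesianGaussianMixture does not accept initial means.)
    gmm = GaussianMixture(
        n_components=len(class_names),
        means_init=anchors,
        covariance_type="diag",  # diagonal covariance keeps the fit stable
        random_state=0,
    )
    gmm.fit(X)

    # Because each component starts at a class-name anchor, the component
    # index doubles as the predicted class label.
    for text, k in zip(texts, gmm.predict(X)):
        print(class_names[k], "<-", text)

Since every component starts at a class-name anchor, the component index returned by predict can be read directly as the class label, which is what makes the clustering zero-shot rather than merely unsupervised.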

