Metadata-Induced Contrastive Learning for Zero-Shot Multi-Label Text Classification

02/11/2022
by Yu Zhang, et al.

Large-scale multi-label text classification (LMTC) aims to associate a document with its relevant labels from a large candidate set. Most existing LMTC approaches rely on massive human-annotated training data, which are often costly to obtain and suffer from a long-tailed label distribution (i.e., many labels occur only a few times in the training set). In this paper, we study LMTC under the zero-shot setting, which does not require any annotated documents with labels and only relies on label surface names and descriptions. To train a classifier that calculates the similarity score between a document and a label, we propose a novel metadata-induced contrastive learning (MICoL) method. Different from previous text-based contrastive learning techniques, MICoL exploits document metadata (e.g., authors, venues, and references of research papers), which are widely available on the Web, to derive similar document-document pairs. Experimental results on two large-scale datasets show that: (1) MICoL significantly outperforms strong zero-shot text classification and contrastive learning baselines; (2) MICoL is on par with the state-of-the-art supervised metadata-aware LMTC method trained on 10K-200K labeled documents; and (3) MICoL tends to predict more infrequent labels than supervised methods, thus alleviating the performance deterioration on long-tailed labels.
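
As a rough illustration of the idea described in the abstract, the sketch below (not the authors' released code) derives document-document positive pairs from shared metadata and trains on them with an in-batch contrastive (InfoNCE) loss. The specific metadata relations used here (shared author, same venue, citation link), the function names, and the loss formulation are illustrative assumptions; the paper defines its own meta-path-based pair construction and encoder architectures.

```python
# Minimal sketch, assuming documents carry author/venue/reference metadata.
# Not the paper's exact method: relations and hyperparameters are illustrative.

from itertools import combinations

import torch
import torch.nn.functional as F


def metadata_positive_pairs(docs):
    """docs: list of dicts with keys 'id', 'authors', 'venue', 'references'.
    Returns index pairs (i, j) of documents linked by at least one metadata relation."""
    pairs = []
    for i, j in combinations(range(len(docs)), 2):
        a, b = docs[i], docs[j]
        shared_author = bool(set(a["authors"]) & set(b["authors"]))
        same_venue = a["venue"] == b["venue"]
        citation_link = a["id"] in b["references"] or b["id"] in a["references"]
        if shared_author or same_venue or citation_link:
            pairs.append((i, j))
    return pairs


def info_nce_loss(anchor_emb, positive_emb, temperature=0.07):
    """In-batch contrastive loss: each anchor embedding should be closer to its
    metadata-linked positive than to the positives of the other anchors."""
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    logits = anchor @ positive.T / temperature                      # (B, B) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)    # diagonal entries are the true pairs
    return F.cross_entropy(logits, targets)
```

At inference time no labeled documents are needed: the trained encoder scores a document against each label's surface name and description, and the highest-scoring labels are returned as predictions.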

Related research

05/24/2023 · PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification
We present PESCO, a novel contrastive learning framework that substantia...

02/15/2021 · MATCH: Metadata-Aware Text Classification in A Large Hierarchy
Multi-label text classification refers to the problem of assigning each ...

12/16/2021 · Extreme Zero-Shot Learning for Extreme Text Classification
The eXtreme Multi-label text Classification (XMC) problem concerns findi...

02/19/2023 · Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset
Real-world data usually exhibits a long-tailed distribution, with a few f...

10/11/2022 · Contrastive Training Improves Zero-Shot Classification of Semi-structured Documents
We investigate semi-structured document classification in a zero-shot se...

04/24/2023 · Generation-driven Contrastive Self-training for Zero-shot Text Classification with Instruction-tuned GPT
Moreover, GPT-based zero-shot classification models tend to make indepen...

04/19/2023 · Shuffle & Divide: Contrastive Learning for Long Text
We propose a self-supervised learning method for long text documents bas...
