CLLD: Contrastive Learning with Label Distance for Text Classification

10/25/2021
by Jinhe Lan, et al.

Existing pre-trained models have achieved state-of-the-art performance on various text classification tasks. These models have proven useful for learning universal language representations. However, even advanced pre-trained models cannot effectively capture the semantic discrepancy between similar texts, which greatly hurts performance on hard-to-distinguish classes. To address this problem, we propose Contrastive Learning with Label Distance (CLLD). Inspired by recent advances in contrastive learning, we design a classification method that uses label distance to learn contrastive classes. CLLD stays flexible with respect to the subtle differences that lead to different label assignments, while simultaneously producing distinct representations for classes that resemble one another. Extensive experiments on public benchmarks and internal datasets demonstrate that our method improves the performance of pre-trained models on classification tasks. Importantly, our experiments suggest that the learned label distance relieves the adversarial nature between classes.
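The abstract does not spell out the objective, but one plausible reading is a supervised contrastive loss in which negative pairs are weighted by a distance between their labels, so that confusable classes are contrasted more strongly. The sketch below is an illustrative assumption, not the paper's actual formulation: the `label_distance` matrix, the `1 / (distance + 1)` weighting, and the temperature value are all placeholders.

```python
import torch
import torch.nn.functional as F

def label_distance_contrastive_loss(embeddings, labels, label_distance, temperature=0.1):
    """Sketch of a supervised contrastive loss whose negatives are weighted
    by a precomputed distance between their labels.

    embeddings:     (N, d) encoder outputs for a batch of texts
    labels:         (N,) integer class ids
    label_distance: (C, C) matrix; larger values mean the labels are easier to tell apart
    """
    z = F.normalize(embeddings, dim=1)          # unit-norm features
    sim = z @ z.t() / temperature               # pairwise similarities
    n = z.size(0)

    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    # Weight each negative by its label distance: closer (more confusable)
    # labels get a larger weight in the denominator.
    dist = label_distance[labels][:, labels]    # (N, N) pairwise label distances
    neg_weight = 1.0 / (dist + 1.0)             # assumed weighting scheme
    neg_weight = neg_weight.masked_fill(pos_mask | eye, 0.0)

    exp_sim = torch.exp(sim)
    denom = (exp_sim * pos_mask).sum(1) + (exp_sim * neg_weight).sum(1)
    log_prob = sim - torch.log(denom + 1e-12).unsqueeze(1)

    # Average the log-probability over the positive pairs of each anchor.
    pos_count = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(1) / pos_count
    return loss[pos_mask.any(1)].mean()         # skip anchors with no positives
```

In this sketch the loss would be added to (or replace) the usual cross-entropy term during fine-tuning of the pre-trained encoder; how CLLD actually combines the two is described in the full paper.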

