Debiased Contrastive Learning of Unsupervised Sentence Representations

05/02/2022
by Kun Zhou, et al.

Recently, contrastive learning has been shown to be effective in improving pre-trained language models (PLMs) to derive high-quality sentence representations. It aims to pull positive examples closer to enhance alignment, while pushing apart irrelevant negatives to improve the uniformity of the whole representation space. However, previous works mostly adopt in-batch negatives or sample negatives at random from the training data. Such sampling may introduce bias, in which improper negatives (e.g., false negatives and anisotropic representations) are used to learn sentence representations, hurting the uniformity of the representation space. To address this, we present a new framework, DCLR (Debiased Contrastive Learning of unsupervised sentence Representations), to alleviate the influence of these improper negatives. In DCLR, we design an instance weighting method to penalize false negatives and generate noise-based negatives to guarantee the uniformity of the representation space. Experiments on seven semantic textual similarity tasks show that our approach is more effective than competitive baselines. Our code and data are publicly available at <https://github.com/RUCAIBox/DCLR>.
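The two ideas named in the abstract (instance weighting that zeroes out likely false negatives, and noise-based negatives added for uniformity) can be sketched as a debiased InfoNCE loss. This is a minimal NumPy illustration, not the paper's implementation: the threshold `phi`, temperature `tau`, and the one-shot Gaussian sampling of noise negatives are simplifying assumptions (DCLR refines its noise-based negatives with gradient updates rather than sampling them once).

```python
import numpy as np

def _normalize(x):
    # L2-normalize each row so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def debiased_infonce(anchors, positives, negatives,
                     tau=0.05, phi=0.9, num_noise=4, seed=0):
    """Sketch of a debiased contrastive loss.

    anchors, positives: (N, d) paired sentence embeddings.
    negatives: (M, d) candidate negatives shared across the batch.
    tau: softmax temperature; phi: similarity threshold above which a
    negative is treated as a likely false negative and down-weighted to 0.
    """
    rng = np.random.default_rng(seed)
    a = _normalize(anchors)
    p = _normalize(positives)
    n = _normalize(negatives)
    # Noise-based negatives: random directions sampled from a Gaussian,
    # included to push representations toward uniformity.
    noise = _normalize(rng.normal(size=(num_noise, a.shape[1])))

    sim_pos = np.sum(a * p, axis=1) / tau          # (N,)
    sim_neg = (a @ n.T) / tau                      # (N, M)
    sim_noise = (a @ noise.T) / tau                # (N, num_noise)

    # Instance weighting: zero out negatives too similar to the anchor,
    # since they are probably false negatives.
    weights = ((a @ n.T) < phi).astype(float)      # (N, M)

    denom = (np.exp(sim_pos)
             + (weights * np.exp(sim_neg)).sum(axis=1)
             + np.exp(sim_noise).sum(axis=1))
    loss = -np.log(np.exp(sim_pos) / denom)
    return float(loss.mean())
```

Filtering a negative that duplicates an anchor (cosine similarity near 1) removes its large contribution from the denominator, so the loss with weighting enabled is strictly lower than with the filter disabled.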


Related research

11/07/2022 · Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for Unsupervised Sentence Embedding
Contrastive learning has become a new paradigm for unsupervised sentence...

09/14/2023 · DebCSE: Rethinking Unsupervised Contrastive Sentence Embedding Learning in the Debiasing Perspective
Several prior studies have suggested that word frequency biases can caus...

03/14/2022 · Deep Continuous Prompt for Contrastive Learning of Sentence Embeddings
The performance of sentence representation has been remarkably improved ...

05/28/2023 · Whitening-based Contrastive Learning of Sentence Embeddings
This paper presents a whitening-based contrastive learning method for se...

07/31/2023 · Scaling Sentence Embeddings with Large Language Models
Large language models (LLMs) have recently garnered significant interest...

10/17/2022 · Correlation between Alignment-Uniformity and Performance of Dense Contrastive Representations
Recently, dense contrastive learning has shown superior performance on d...

01/26/2022 · Pair-Level Supervised Contrastive Learning for Natural Language Inference
Natural language inference (NLI) is an increasingly important task for n...
