Deep Continuous Prompt for Contrastive Learning of Sentence Embeddings

03/14/2022
by Yuxin Jiang, et al.

The performance of sentence representation has been remarkably improved by the framework of contrastive learning. However, recent works still require full fine-tuning, which is quite inefficient for large-scale pre-trained language models. To this end, we present a novel method which freezes the whole language model and only optimizes the prefix deep continuous prompts. It not only tunes around 0.1% of the parameters but also avoids the cumbersome computation of searching handcrafted prompts. Experimental results show that our proposed DCPCSE outperforms the state-of-the-art method SimCSE by a large margin. We raise the performance of unsupervised BERT_base and supervised RoBERTa_large by 2.24 and 1.00 points, respectively. Our code is publicly available at https://github.com/YJiangcm/DCPCSE
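
The abstract does not include code, but the idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch/Hugging Face illustration, not the authors' released implementation: the encoder is frozen, only prepended continuous prompt vectors are trained, and a SimCSE-style in-batch contrastive loss is optimized. The names PromptContrastiveEncoder and simcse_loss and the prompt_len value are assumptions, and for brevity the prompts are injected only at the input layer, whereas DCPCSE's "deep" prompts are inserted at every Transformer layer.

```python
# Minimal sketch (assumes PyTorch + Hugging Face transformers); not the
# authors' implementation. Shallow variant: trainable prompt vectors are
# prepended at the input layer only, while DCPCSE inserts deep prompts
# at every Transformer layer.
import torch
import torch.nn.functional as F
from transformers import AutoModel

class PromptContrastiveEncoder(torch.nn.Module):  # hypothetical name
    def __init__(self, model_name="bert-base-uncased", prompt_len=16):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        for p in self.encoder.parameters():
            p.requires_grad = False              # freeze the whole LM
        hidden = self.encoder.config.hidden_size
        # The only trainable parameters: roughly 0.1% of the model.
        self.prompt = torch.nn.Parameter(0.02 * torch.randn(prompt_len, hidden))

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        tok_emb = self.encoder.embeddings.word_embeddings(input_ids)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.encoder(inputs_embeds=inputs_embeds, attention_mask=mask)
        cls_pos = self.prompt.size(0)            # [CLS] sits right after the prompt
        return out.last_hidden_state[:, cls_pos]

def simcse_loss(z1, z2, temperature=0.05):
    """SimCSE-style in-batch contrastive loss: matching rows are positives."""
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim / temperature, labels)
```

In the unsupervised setting, encoding the same batch twice yields two dropout-perturbed views, so z1 = model(ids, mask) and z2 = model(ids, mask) form the positive pairs, and only the prompt parameters receive gradients.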
