Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings

09/20/2022
by   Yiren Jian, et al.

Semantic representation learning for sentences is an important and well-studied problem in NLP. The current approach for this task trains a Transformer-based sentence encoder with a contrastive objective on text, i.e., pulling together sentences with semantically similar meanings and pushing apart dissimilar ones. In this work, we find that the performance of Transformer models as sentence encoders can be improved by training with multi-modal multi-task losses, using unpaired examples from another modality (e.g., sentences and unrelated image/audio data). In particular, besides learning from the contrastive loss on text, our model simultaneously clusters examples from a non-linguistic domain (e.g., visual/audio) with a similar contrastive loss. Because our framework relies only on unpaired non-linguistic data, it is language-agnostic and widely applicable beyond English NLP. Experiments on 7 semantic textual similarity benchmarks show that models trained with the additional non-linguistic (image/audio) contrastive objective produce higher-quality sentence embeddings. This indicates that Transformer models generalize better when performing a similar task (i.e., clustering) on unpaired examples from different modalities in a multi-task fashion.
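The multi-task objective described above can be sketched as two independent InfoNCE-style contrastive losses, one over sentence embeddings and one over embeddings from the unpaired modality, summed together. This is a minimal illustrative sketch, not the paper's implementation: the `info_nce` and `multitask_loss` functions, the dropout-style positive construction, and the loss weight are all assumptions made for demonstration.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.05):
    """In-batch InfoNCE: each anchor's positive is the same-index row;
    all other rows in the batch serve as negatives."""
    # Normalize rows so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries are the positive pairs; minimize their negative log-prob.
    return -float(np.mean(np.diag(log_probs)))

def multitask_loss(text_emb, text_pos, other_emb, other_pos, weight=1.0):
    # Joint objective: contrastive loss on text plus a contrastive loss on
    # UNPAIRED examples from another modality (images/audio) -- the two
    # batches never need to be aligned with each other.
    return info_nce(text_emb, text_pos) + weight * info_nce(other_emb, other_pos)

rng = np.random.default_rng(0)
txt = rng.normal(size=(8, 16))   # stand-in sentence embeddings
img = rng.normal(size=(8, 16))   # stand-in, unrelated image embeddings
# Positives as small perturbations of the anchors (mimicking dropout noise).
loss = multitask_loss(txt, txt + 0.01 * rng.normal(size=txt.shape),
                      img, img + 0.01 * rng.normal(size=img.shape))
print(f"joint loss: {loss:.4f}")
```

Note that the gradient of the second term never touches the text/image pairing, which is why the method works with unrelated image or audio data.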


Related research

- 04/22/2022 · MCSE: Multimodal Contrastive Learning of Sentence Embeddings
  Learning semantically meaningful sentence embeddings is an open problem ...
- 09/27/2022 · Regularized Contrastive Learning of Semantic Search
  Semantic search is an important task whose objective is to find the rele...
- 10/14/2021 · Transferring Semantic Knowledge Into Language Encoders
  We introduce semantic form mid-tuning, an approach for transferring sema...
- 09/29/2021 · Contrastive Video-Language Segmentation
  We focus on the problem of segmenting a certain object referred by a nat...
- 09/30/2021 · Focused Contrastive Training for Test-based Constituency Analysis
  We propose a scheme for self-training of grammaticality models for const...
- 05/09/2023 · StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure
  This work explores the utility of explicit structure for representation ...
- 08/07/2023 · A Hybrid CNN-Transformer Architecture with Frequency Domain Contrastive Learning for Image Deraining
  Image deraining is a challenging task that involves restoring degraded i...
