
MCSE: Multimodal Contrastive Learning of Sentence Embeddings

by   Miaoran Zhang, et al.
Universität Saarland

Learning semantically meaningful sentence embeddings is an open problem in natural language processing. In this work, we propose a sentence embedding learning approach that exploits both visual and textual information via a multimodal contrastive objective. Through experiments on a variety of semantic textual similarity tasks, we demonstrate that our approach consistently improves performance across various datasets and pre-trained encoders. In particular, by combining a small amount of multimodal data with a large text-only corpus, we improve the state-of-the-art average Spearman's correlation by 1.7%. By analyzing the properties of the textual embedding space, we show that our model excels in aligning semantically similar sentences, providing an explanation for its improved performance.
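To make the multimodal contrastive objective concrete, here is a minimal sketch of an InfoNCE-style loss applied to paired embeddings. This is an illustration under assumptions, not the paper's exact formulation: the function names (`info_nce`, `mcse_style_loss`), the temperature value, and the simple summation of the text-text and text-image terms with a weight `lam` are hypothetical choices for demonstration; the published method may weight and batch the two objectives differently.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.05):
    """InfoNCE loss: each anchor's positive is the same-index row in
    `positives`; all other rows in the batch serve as negatives."""
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = (a @ p.T) / temperature            # (N, N) similarity logits
    # log-softmax over each row, numerically stabilized
    logits = sims - sims.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # matching pairs sit on the diagonal
    return -np.mean(np.diag(log_probs))

def mcse_style_loss(text_view_a, text_view_b, image_emb, lam=1.0):
    """Hypothetical combination: a text-text contrastive term (e.g. two
    dropout views of the same sentence, as in SimCSE) plus a text-image
    term on the multimodal subset. `lam` balances the two objectives."""
    return info_nce(text_view_a, text_view_b) + lam * info_nce(text_view_a, image_emb)
```

With this setup, aligned pairs (same row index) are pulled together while in-batch negatives are pushed apart; a batch where the positives are shuffled yields a higher loss than one where they match.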





Code Repositories


NAACL 2022: MCSE: Multimodal Contrastive Learning of Sentence Embeddings
