Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

03/03/2022
by Weixin Liang, et al.

We present the modality gap, an intriguing geometric phenomenon of the representation space of multi-modal models. Specifically, we show that different data modalities (e.g., images and text) are embedded at arm's length in their shared representation space in multi-modal models such as CLIP. Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization. At initialization, we show empirically and theoretically that the representations of a common deep neural network are restricted to a narrow cone. As a consequence, in a multi-modal model with two encoders, the representations of the two modalities are clearly apart when the model is initialized. During optimization, contrastive learning keeps the different modalities separated by a certain distance, which is influenced by the temperature parameter in the loss function. Our experiments further demonstrate that varying the modality gap distance can significantly improve the model's downstream zero-shot classification performance and fairness. Our code and data are available at https://modalitygap.readthedocs.io/
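
The gap described above can be made concrete with a few lines of code. The sketch below is not the authors' released code; it assumes the Hugging Face transformers CLIP API, and the model name, captions, and blank placeholder images are purely illustrative. It estimates the gap as the distance between the centroids of the normalized image and text embeddings, and shows one simple way to shift embeddings along the gap direction to vary its size.

```python
# Minimal sketch (illustrative, not the paper's official code): measure the
# modality gap in CLIP as the distance between the centroids of the two
# modalities' embeddings, then shift embeddings along the gap direction.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

texts = ["a photo of a dog", "a photo of a cat"]         # placeholder captions
images = [Image.new("RGB", (224, 224)) for _ in texts]   # placeholder images

inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)

with torch.no_grad():
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])

# CLIP embeddings are compared on the unit hypersphere, so normalize first.
img = img / img.norm(dim=-1, keepdim=True)
txt = txt / txt.norm(dim=-1, keepdim=True)

# Gap vector: difference between the image centroid and the text centroid.
gap = img.mean(dim=0) - txt.mean(dim=0)
print("modality gap (Euclidean norm):", gap.norm().item())

# Shifting one modality along the gap direction (and re-normalizing) is one
# way to vary the gap distance and probe its effect on downstream tasks.
shifted_img = img - 0.5 * gap
shifted_img = shifted_img / shifted_img.norm(dim=-1, keepdim=True)
```

On real image-text pairs the two centroids remain clearly separated, which is the phenomenon the paper analyzes; sweeping the shift coefficient is how one can study the effect of the gap distance on zero-shot classification and fairness.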


research · 09/02/2022
Multi-modal Contrastive Representation Learning for Entity Alignment
Multi-modal entity alignment aims to identify equivalent entities betwee...

research · 03/10/2023
Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning
Contrastive loss has been increasingly used in learning representations ...

research · 11/21/2022
Unifying Vision-Language Representation Space with Single-tower Transformer
Contrastive learning is a form of distance learning that aims to learn i...

research · 12/23/2016
DeMIAN: Deep Modality Invariant Adversarial Network
Obtaining common representations from different modalities is important ...

research · 06/07/2019
Evolving Losses for Unlabeled Video Representation Learning
We present a new method to learn video representations from unlabeled da...

research · 07/07/2023
CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution
Models leveraging both visual and textual data such as Contrastive Langu...

research · 01/27/2021
Learning Abstract Representations through Lossy Compression of Multi-Modal Signals
A key competence for open-ended learning is the formation of increasingl...
