Identifiability Results for Multimodal Contrastive Learning

03/16/2023
by   Imant Daunhawer, et al.
Contrastive learning is a cornerstone underlying recent progress in multi-view and multimodal learning, e.g., in representation learning with image/caption pairs. While its effectiveness is not yet fully understood, a line of recent work reveals that contrastive learning can invert the data generating process and recover ground truth latent factors shared between views. In this work, we present new identifiability results for multimodal contrastive learning, showing that it is possible to recover shared factors in a more general setup than the multi-view setting studied previously. Specifically, we distinguish between the multi-view setting with one generative mechanism (e.g., multiple cameras of the same type) and the multimodal setting that is characterized by distinct mechanisms (e.g., cameras and microphones). Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables. We prove that contrastive learning can block-identify latent factors shared between modalities, even when there are nontrivial dependencies between factors. We empirically verify our identifiability results with numerical simulations and corroborate our findings on a complex multimodal dataset of image/text pairs. Zooming out, our work provides a theoretical basis for multimodal representation learning and explains in which settings multimodal contrastive learning can be effective in practice.
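As a concrete reference point, multimodal contrastive learning of the kind analyzed in this work is typically trained with a symmetric InfoNCE objective over paired samples from the two modalities (as in image/caption pretraining). The sketch below is a minimal NumPy illustration of that loss, not the authors' implementation; the encoder outputs, batch size, and temperature are placeholder assumptions.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Symmetric InfoNCE loss for a batch of paired embeddings.

    z1, z2: (n, d) arrays of L2-normalized representations from the two
    modality encoders; row i of z1 is paired with row i of z2.
    """
    n = z1.shape[0]
    logits = z1 @ z2.T / tau  # (n, n) cosine-similarity matrix, scaled
    # Cross-entropy with the matching pair on the diagonal, in both
    # directions (modality 1 -> modality 2 and vice versa).
    log_sm_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -(np.trace(log_sm_rows) + np.trace(log_sm_cols)) / (2 * n)

# Toy check: correctly paired embeddings incur a lower loss than
# embeddings whose pairing has been cyclically shifted.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
z /= np.linalg.norm(z, axis=1, keepdims=True)
aligned = info_nce(z, z)
shifted = info_nce(z, z[np.roll(np.arange(8), 1)])
```

Minimizing this objective pulls paired representations together and pushes unpaired ones apart; the paper's identifiability results characterize when the resulting encoders block-identify the latent factors shared between the two modalities.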

Related research

10/28/2022
Improving the Modality Representation with Multi-View Contrastive Learning for Multimodal Sentiment Analysis
Modality representation learning is an important problem for multimodal ...

01/19/2022
TriCoLo: Trimodal Contrastive Loss for Fine-grained Text to Shape Retrieval
Recent work on contrastive losses for learning joint embeddings over mul...

06/08/2023
Factorized Contrastive Learning: Going Beyond Multi-view Redundancy
In a wide range of multimodal tasks, contrastive learning has become a p...

02/17/2020
Learning Robust Representations via Multi-View Information Bottleneck
The information bottleneck principle provides an information-theoretic m...

10/29/2021
Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning
A key goal of unsupervised representation learning is "inverting" a data...

02/13/2023
Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data
Language-supervised vision models have recently attracted great attentio...

03/26/2015
Generalized K-fan Multimodal Deep Model with Shared Representations
Multimodal learning with deep Boltzmann machines (DBMs) is a generative...
