Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

03/10/2023
by Qian Jiang, et al.

Contrastive loss has been increasingly used to learn representations from multiple modalities. In the limit, the contrastive loss encourages the modalities to match each other exactly in the latent space, yet how this modality alignment affects downstream task performance remains an open question. In this paper, based on an information-theoretic argument, we first prove that exact modality alignment is in general sub-optimal for downstream prediction tasks. We therefore advocate that the key to better performance lies in meaningful latent modality structures rather than perfect modality alignment. To this end, we propose three general approaches to constructing latent modality structures: 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization. We conduct extensive experiments on two popular multi-modal representation learning frameworks, the CLIP-based two-tower model and the ALBEF-based fusion model, and evaluate on a variety of tasks including zero-/few-shot image classification, image-text retrieval, visual question answering, visual reasoning, and visual entailment. Our method achieves consistent improvements over existing methods, demonstrating the effectiveness and generalizability of the proposed latent modality structure regularization.
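To make the alignment claim concrete, here is a minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss, showing that its value is driven toward its minimum when paired image/text embeddings coincide exactly in the latent space. Function names, shapes, and the temperature value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) arrays; row i of each forms a positive pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(img))         # positives sit on the diagonal

    def cross_entropy(lg, y):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))

# When the two modalities' embeddings match exactly, every diagonal
# similarity is maximal (cosine = 1), so the loss is near its minimum;
# unrelated embeddings yield a much larger loss.
rng = np.random.default_rng(0)
z = rng.standard_normal((4, 8))
aligned = clip_contrastive_loss(z, z)                           # exact match
misaligned = clip_contrastive_loss(z, rng.standard_normal((4, 8)))
```

The paper's argument is that this pull toward exact alignment is what the proposed intra- and inter-modality regularizers deliberately relax.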

