Contrastive Multimodal Fusion with TupleInfoNCE

07/06/2021
by Yunze Liu, et al.

This paper proposes a method for representation learning on multimodal data using contrastive losses. A traditional approach is to contrast different modalities against each other to learn the information they share. However, that approach can fail to capture the complementary synergies between modalities that might be useful for downstream tasks. Another approach is to concatenate all the modalities into a tuple and then contrast positive and negative tuple correspondences. However, that approach tends to rely only on the stronger modalities while ignoring the weaker ones. To address these issues, we propose a novel contrastive learning objective, TupleInfoNCE. It contrasts tuples based not only on positive and negative correspondences but also on new negative tuples composed of modalities drawn from different scenes. Training against these additional negatives encourages the model to examine the correspondences among modalities within the same tuple, ensuring that weak modalities are not ignored. We provide a theoretical justification, based on mutual information, for why this approach works, and we propose a sample optimization algorithm that generates positive and negative samples to maximize training efficacy. We find that TupleInfoNCE significantly outperforms previous state-of-the-art methods on three different downstream tasks.
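The core idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the encoder (a plain concatenation here), the two-modality scenes, and the single-swap negative construction are all simplifying assumptions. It only shows the shape of the objective: each anchor tuple is contrasted against its positive, against whole tuples from other scenes, and against "disturbed" tuples in which one modality has been replaced by the corresponding modality from a different scene.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Standard InfoNCE: cross-entropy with the positive at index 0."""
    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)]
                      + [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

def tuple_info_nce(scenes, encode, temperature=0.1, seed=0):
    """scenes: list of dicts mapping modality name -> feature vector.
    encode: maps one such dict to a single embedding (hypothetical stand-in
    for the learned fusion encoder)."""
    rng = np.random.default_rng(seed)
    losses = []
    for i, scene in enumerate(scenes):
        anchor = encode(scene)
        # In practice the positive is an augmented view of the same scene;
        # the identity view keeps this sketch self-contained.
        positive = encode(scene)
        negatives = []
        for j, other in enumerate(scenes):
            if j == i:
                continue
            # Ordinary negative: the full tuple of a different scene.
            negatives.append(encode(other))
            # Modality-disturbed negative: same scene, but one modality
            # swapped in from the other scene, so the tuple is inconsistent.
            m = rng.choice(sorted(scene))
            mixed = dict(scene)
            mixed[m] = other[m]
            negatives.append(encode(mixed))
        losses.append(info_nce(anchor, positive, negatives, temperature))
    return float(np.mean(losses))
```

The disturbed negatives are what distinguish this from plain tuple-level InfoNCE: a model that attends only to the dominant modality cannot tell a mixed tuple from a genuine one, so it pays a loss penalty unless it also checks the weaker modalities for cross-modal consistency.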

