Multimodal Generative Models for Compositional Representation Learning

12/11/2019
by Mike Wu, et al.

As deep neural networks become more adept at traditional tasks, many of the most exciting new challenges concern multimodality—observations that combine diverse types, such as image and text. In this paper, we introduce a family of multimodal deep generative models derived from variational bounds on the evidence (data marginal likelihood). As part of our derivation, we find that many previous multimodal variational autoencoders used objectives that do not correctly bound the joint marginal likelihood across modalities. We further generalize our objective to work with several types of deep generative model (VAE, GAN, and flow-based), and allow different model types to be used for different modalities. We benchmark our models across many image, label, and text datasets, and find that our multimodal VAEs excel with and without weak supervision. Further gains come from pairing GAN image models with VAE language models. Finally, we investigate the effect of language on learned image representations through a variety of downstream tasks, such as compositionality, bounding box prediction, and visual relation prediction. We find evidence that these image representations are more abstract and compositional than equivalent representations learned from visual data alone.
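The paper's central object is a variational lower bound (ELBO) on the joint marginal likelihood across modalities. As a rough illustration of the idea (not the paper's objective or implementation; the joint encoder and per-modality decoders below are hypothetical linear placeholders), a Monte Carlo estimate of a two-modality joint ELBO can be sketched as:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def gaussian_log_prob(x, mu, logvar):
    # Log density of x under a diagonal Gaussian N(mu, diag(exp(logvar))).
    return -0.5 * np.sum(logvar + np.log(2 * np.pi) + (x - mu) ** 2 / np.exp(logvar))

def joint_elbo(x, y, enc, dec_x, dec_y, rng, n_samples=8):
    """Monte Carlo estimate of the joint two-modality ELBO:
       E_{q(z|x,y)}[log p(x|z) + log p(y|z)] - KL(q(z|x,y) || p(z))."""
    mu, logvar = enc(x, y)               # joint encoder over both modalities
    kl = gaussian_kl(mu, logvar)
    recon = 0.0
    for _ in range(n_samples):
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
        mx, lvx = dec_x(z)               # decoder for modality x (e.g. image)
        my, lvy = dec_y(z)               # decoder for modality y (e.g. text)
        recon += gaussian_log_prob(x, mx, lvx) + gaussian_log_prob(y, my, lvy)
    return recon / n_samples - kl

# Toy two-modality example; the linear maps are placeholders for real networks.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))          # maps concatenated (x, y) to latent mean
enc = lambda x, y: (W @ np.concatenate([x, y]), np.zeros(2))
dec_x = lambda z: (z, np.zeros(2))
dec_y = lambda z: (z, np.zeros(2))
x, y = np.ones(2), np.zeros(2)
elbo = joint_elbo(x, y, enc, dec_x, dec_y, rng)
```

The key structural point the paper makes is that the objective must lower-bound the *joint* likelihood p(x, y); using a single joint posterior q(z | x, y) with both decoders inside the expectation, as above, is one way that bound stays valid.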


Related research

- A survey of multimodal deep generative models (07/05/2022): Multimodal learning is a framework for building models that make predict...
- Unconditional Image-Text Pair Generation with Multimodal Cross Quantizer (04/15/2022): Though deep generative models have gained a lot of attention, most of th...
- ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities (10/11/2022): We introduce ViLPAct, a novel vision-language benchmark for human activi...
- Joint Multimodal Learning with Deep Generative Models (11/07/2016): We investigate deep generative models that can exchange multiple modalit...
- On the Limitations of Multimodal VAEs (10/08/2021): Multimodal variational autoencoders (VAEs) have shown promise as efficie...
- Speech Prediction in Silent Videos using Variational Autoencoders (11/14/2020): Understanding the relationship between the auditory and visual signals i...
- Identifying through Flows for Recovering Latent Representations (09/27/2019): Identifiability, or recovery of the true latent representations from whi...
