Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts

01/18/2021 · Svetlana Kutuzova, et al.

Multimodal generative models should learn a meaningful latent representation that enables coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample one modality conditioned on observations of a subset of the others. Often, not all modalities are observed for every training data point, so training should also work in a semi-supervised setting. In this study, we evaluate a family of product-of-experts (PoE) based variational autoencoders that have these desired properties. We include a novel PoE based architecture and training procedure. An empirical evaluation shows that the PoE based models can outperform an additive mixture-of-experts (MoE) approach. Our experiments support the intuition that PoE models are better suited for a conjunctive combination of modalities, while MoEs are better suited for a disjunctive fusion.
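The PoE fusion discussed above has a convenient closed form when each modality's encoder outputs a diagonal Gaussian: the product of Gaussian experts is again Gaussian, with precision equal to the sum of the experts' precisions. The sketch below illustrates this (it is a generic illustration of Gaussian PoE fusion, not the paper's exact architecture; the function name and the inclusion of a standard-normal prior expert are our assumptions, following common practice in PoE-based multimodal VAEs):

```python
import numpy as np

def poe_gaussian(mus, logvars):
    """Fuse Gaussian experts N(mu_i, diag(exp(logvar_i))) by their product.

    The fused distribution is Gaussian with precision equal to the sum of
    the experts' precisions and a precision-weighted mean. A standard-normal
    prior expert N(0, I) is prepended, so the function remains well-defined
    when only a subset of modalities is observed -- the property that makes
    PoE fusion attractive for semi-supervised training.
    """
    mus = np.concatenate([np.zeros_like(mus[:1]), mus])          # prior mean 0
    logvars = np.concatenate([np.zeros_like(logvars[:1]), logvars])  # prior var 1
    precisions = np.exp(-logvars)                  # 1 / sigma_i^2 per expert
    var = 1.0 / precisions.sum(axis=0)             # fused variance
    mu = var * (precisions * mus).sum(axis=0)      # precision-weighted mean
    return mu, np.log(var)

# Two modality experts with unit variances:
mu = np.array([[1.0, -2.0], [3.0, -2.0]])
logvar = np.zeros((2, 2))
fused_mu, fused_logvar = poe_gaussian(mu, logvar)
# With three unit-variance experts (prior + 2 modalities), the fused mean
# is the average of the expert means and the fused variance is 1/3.
```

Note the conjunctive behavior: each additional expert can only sharpen the fused posterior (precisions add), whereas an MoE average keeps each modality's mode, which matches the intuition stated in the abstract.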


Related research

11/08/2019 · Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models
Learning generative models that span multiple data modalities, such as v...

05/25/2023 · Score-Based Multimodal Autoencoders
Multimodal Variational Autoencoders (VAEs) represent a promising group o...

05/19/2023 · Improving Multimodal Joint Variational Autoencoders through Normalizing Flows and Correlation Analysis
We propose a new multimodal variational autoencoder that enables to gene...

10/25/2020 · An empirical study of domain-agnostic semi-supervised learning via energy-based models: joint-training and pre-training
A class of recent semi-supervised learning (SSL) methods heavily rely on...

11/01/2019 · Variational Autoencoders for Generative Modelling of Water Cherenkov Detectors
Matter-antimatter asymmetry is one of the major unsolved problems in phy...

06/07/2023 · Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications
In many machine learning systems that jointly learn from multiple modali...

09/07/2022 · Benchmarking Multimodal Variational Autoencoders: GeBiD Dataset and Toolkit
Multimodal Variational Autoencoders (VAEs) have been a subject of intens...
