M^2VAE - Derivation of a Multi-Modal Variational Autoencoder Objective from the Marginal Joint Log-Likelihood

03/18/2019
by Timo Korthals, et al.

This work presents an in-depth derivation of the trainable evidence lower bound obtained from the marginal joint log-likelihood, with the goal of training a Multi-Modal Variational Autoencoder (M^2VAE).
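The lower bound referenced in the abstract follows the standard variational argument. As a sketch in generic VAE notation (the symbols below are illustrative, not necessarily the paper's): for two modalities x_a and x_b, a joint encoder q_phi(z | x_a, x_b), decoder p_theta, and prior p(z), Jensen's inequality applied to the marginal joint log-likelihood yields the familiar ELBO:

```latex
\log p_\theta(x_a, x_b)
  \;\geq\;
  \mathbb{E}_{q_\phi(z \mid x_a, x_b)}
    \bigl[ \log p_\theta(x_a, x_b \mid z) \bigr]
  \;-\;
  D_{\mathrm{KL}}\!\bigl( q_\phi(z \mid x_a, x_b) \,\big\|\, p(z) \bigr)
```

The first term is the expected reconstruction log-likelihood over both modalities; the second is the KL regularizer pulling the joint posterior toward the prior. A multi-modal objective such as the M^2VAE's additionally has to account for the uni-modal marginals and their encoders, which is what the full derivation works out.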


Related research

11/01/2019: A Perceived Environment Design using a Multi-Modal Variational Autoencoder for learning Active-Sensing
This contribution comprises the interplay between a multi-modal variatio...

02/07/2022: Multi-modal data generation with a deep metric variational autoencoder
We present a deep metric variational autoencoder for multi-modal data ge...

09/22/2022: FusionVAE: A Deep Hierarchical Variational Autoencoder for RGB Image Fusion
Sensor fusion can significantly improve the performance of many computer...

09/25/2020: Hierarchical Sparse Variational Autoencoder for Text Encoding
In this paper we focus on unsupervised representation learning and propo...

09/01/2023: Learning multi-modal generative models with permutation-invariant encoders and tighter variational bounds
Devising deep latent variable models for multi-modal data has been a lon...

06/28/2021: Dizygotic Conditional Variational AutoEncoder for Multi-Modal and Partial Modality Absent Few-Shot Learning
Data augmentation is a powerful technique for improving the performance ...

06/06/2022: Embrace the Gap: VAEs Perform Independent Mechanism Analysis
Variational autoencoders (VAEs) are a popular framework for modeling com...
