Multi-Source Neural Variational Inference

11/11/2018
by Richard Kurle, et al.

Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of the various information sources. In this paper we formulate a multi-source learning framework based on the variational autoencoder, in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between the individual sources' posterior approximations. We explore a variety of options for learning these encoders and for integrating the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase the robustness of inference in a multi-source setting.
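One standard way to integrate per-source Gaussian beliefs into a consistent posterior approximation is a precision-weighted product of experts, and a KL divergence between the individual beliefs can serve as a conflict score. The sketch below illustrates these two ingredients with numpy; the function names and toy numbers are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def product_of_gaussians(mu_a, var_a, mu_b, var_b):
    """Fuse two diagonal-Gaussian beliefs via a precision-weighted product."""
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (prec_a + prec_b)              # combined precision adds up
    mu = var * (prec_a * mu_a + prec_b * mu_b) # mean weighted by precision
    return mu, var

def kl_gaussian(mu_a, var_a, mu_b, var_b):
    """KL(N_a || N_b) for diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        np.log(var_b / var_a) + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0
    )

# Two sources with slightly different beliefs about the same latent variable
mu1, v1 = np.array([0.0, 1.0]), np.array([1.0, 1.0])
mu2, v2 = np.array([0.2, 0.8]), np.array([0.5, 2.0])

mu, var = product_of_gaussians(mu1, v1, mu2, v2)  # fused belief
conflict = kl_gaussian(mu1, v1, mu2, v2)          # disagreement measure
```

Note that the fused variance is never larger than either source's variance: agreeing sources reinforce each other, while a large KL score flags conflicting beliefs whose fusion should be treated with caution.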


