Disentangled Variational Information Bottleneck for Multiview Representation Learning

05/17/2021
by Feng Bao, et al.

Multiview data contain information from multiple modalities and have the potential to provide more comprehensive features for diverse machine learning tasks. A fundamental question in multiview analysis is what additional information is brought by additional views, and whether this additional information can be quantitatively identified. In this work, we tackle this challenge by decomposing entangled multiview features into shared latent representations that are common across all views and private representations that are specific to each single view. We formulate this feature disentanglement within the information bottleneck framework and propose the disentangled variational information bottleneck (DVIB). DVIB explicitly defines the properties of the shared and private representations through constraints on mutual information. By deriving variational upper and lower bounds on the mutual information terms, the representations can be optimized efficiently. We demonstrate that the shared and private representations learned by DVIB preserve, respectively, the common labels shared between two views and the unique labels corresponding to each single view. DVIB also shows comparable performance on classification tasks with corrupted images. The DVIB implementation is available at https://github.com/feng-bao-ucsf/DVIB.
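The linked repository contains the authors' implementation; the sketch below is only a rough illustration of the kind of model the abstract describes for a two-view setting: per-view shared and private Gaussian encoders, a KL term used as a variational upper bound on mutual information (the compression side of the bottleneck), and a reconstruction term as a lower-bound surrogate for the information each latent retains about its view. All class names, dimensions, the MSE reconstruction surrogate, and the cross-view alignment penalty are assumptions made for illustration, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianEncoder(nn.Module):
    """Encodes one view into a diagonal-Gaussian latent q(z|x)."""

    def __init__(self, in_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar


def kl_to_standard_normal(mu, logvar):
    # Analytic KL(q(z|x) || N(0, I)): a variational upper bound on I(x; z).
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()


class TwoViewDVIBSketch(nn.Module):
    """Illustrative two-view model: shared + private encoders per view, with
    decoders whose reconstruction loss acts as a lower-bound surrogate for the
    mutual information between each view and its latent codes."""

    def __init__(self, dims=(784, 784), z_shared=32, z_private=32):
        super().__init__()
        self.enc_s = nn.ModuleList([GaussianEncoder(d, z_shared) for d in dims])
        self.enc_p = nn.ModuleList([GaussianEncoder(d, z_private) for d in dims])
        # Each view is reconstructed from its shared + private codes.
        self.dec = nn.ModuleList([nn.Linear(z_shared + z_private, d) for d in dims])

    def loss(self, x1, x2, beta=1e-3):
        total_rec, total_kl, shared_mus = 0.0, 0.0, []
        for i, x in enumerate((x1, x2)):
            zs, mu_s, lv_s = self.enc_s[i](x)
            zp, mu_p, lv_p = self.enc_p[i](x)
            x_hat = self.dec[i](torch.cat([zs, zp], dim=1))
            total_rec += F.mse_loss(x_hat, x)  # lower-bound surrogate
            total_kl += kl_to_standard_normal(mu_s, lv_s) \
                + kl_to_standard_normal(mu_p, lv_p)  # upper-bound compression terms
            shared_mus.append(mu_s)
        # Encourage the shared codes of the two views to agree; a simple
        # stand-in for the cross-view mutual-information constraint.
        align = F.mse_loss(shared_mus[0], shared_mus[1])
        return total_rec + beta * total_kl + align


if __name__ == "__main__":
    model = TwoViewDVIBSketch()
    x1, x2 = torch.randn(8, 784), torch.randn(8, 784)
    model.loss(x1, x2).backward()
```

The trade-off weight beta plays the usual information-bottleneck role: larger values compress the latents more aggressively, smaller values favor retaining view-specific detail.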

