Distributed Variational Representation Learning

07/11/2018
by Iñaki Estella Aguerri, et al.

The problem of distributed representation learning is one in which multiple sources of information X_1,...,X_K are processed separately so as to learn as much information as possible about some ground truth Y. We investigate this problem from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting. Specifically, K encoders, K ≥ 2, compress their observations X_1,...,X_K separately in a manner such that, collectively, the produced representations preserve as much information as possible about Y. We study both discrete memoryless (DM) and memoryless vector Gaussian data models. For the discrete model, we establish a single-letter characterization of the optimal tradeoff between complexity (or rate) and relevance (or information) for a class of memoryless sources (the observations X_1,...,X_K being conditionally independent given Y). For the vector Gaussian model, we provide an explicit characterization of the optimal complexity-relevance tradeoff. Furthermore, we develop a variational bound on the complexity-relevance tradeoff which generalizes the evidence lower bound (ELBO) to the distributed setting. We also provide two algorithms to compute this bound: i) a Blahut-Arimoto type iterative algorithm that computes optimal complexity-relevance encoding mappings by iterating over a set of self-consistent equations, and ii) a variational inference type algorithm in which the encoding mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on synthetic and real datasets are provided to demonstrate the effectiveness of the approaches and algorithms developed in this paper.
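To make the second algorithm concrete, here is a minimal sketch, in PyTorch, of the kind of distributed variational objective the abstract describes: K neural-network encoders produce Gaussian representations of their respective observations, a decoder predicts Y from the concatenated representations, and the bound is estimated with reparameterized sampling and minimized by stochastic gradient descent. This is not the authors' implementation; the module shapes, the standard-normal priors on the representations, the single-sample estimate, and the trade-off weight beta are illustrative assumptions.

```python
# A hedged sketch of a distributed variational IB objective (not the paper's
# released code). Assumes K encoders q(u_k | x_k), a decoder q(y | u_1..u_K),
# and standard-normal priors on each u_k; names and dimensions are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps one observation x_k to the mean and log-variance of q(u_k | x_k)."""
    def __init__(self, x_dim, u_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, u_dim)
        self.logvar = nn.Linear(hidden, u_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)


def dvib_loss(encoders, decoder, xs, y, beta):
    """Relevance term (decoder cross-entropy) plus beta times the sum of
    per-encoder KL complexity terms, using one reparameterized sample each."""
    us, kl = [], 0.0
    for enc, x in zip(encoders, xs):
        mu, logvar = enc(x)
        u = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        # KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions, averaged over the batch
        kl = kl + 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
        us.append(u)
    logits = decoder(torch.cat(us, dim=1))    # q(y | u_1, ..., u_K)
    relevance = F.cross_entropy(logits, y)    # variational bound on the relevance term
    return relevance + beta * kl
```

In use, one would instantiate a list of Encoder modules and a decoder network, then minimize dvib_loss over mini-batches with an optimizer such as torch.optim.Adam; sweeping beta traces out an empirical complexity-relevance curve.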

Related research

05/28/2019 - Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding
01/31/2020 - On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views
02/15/2021 - Scalable Vector Gaussian Information Bottleneck
11/02/2020 - On the Relevance-Complexity Region of Scalable Information Bottleneck
04/05/2016 - Collaborative Information Bottleneck
11/12/2019 - Learning Representations in Reinforcement Learning: An Information Bottleneck Approach
06/20/2022 - Variational Distillation for Multi-View Learning
