Learning Sparse Latent Representations with the Deep Copula Information Bottleneck

04/17/2018
by Aleksander Wieczorek, et al.

Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings, and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates into sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
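
The copula transformation at the core of the method can be sketched in a few lines. The Python snippet below is a minimal illustration, not the authors' implementation: it applies the standard Gaussian copula (normal-scores) transform, mapping each feature through its empirical CDF and then the standard normal quantile function. The helper name `copula_transform` is hypothetical. Because ranks are unchanged by strictly monotone feature-wise transformations, the transformed data, and hence anything trained on it, inherits the invariance described above.

```python
# Minimal sketch of a Gaussian copula (normal-scores) transform,
# assumed here as an illustration of the kind of marginal
# transformation the paper applies; not the authors' code.
# Requires SciPy >= 1.4 for the `axis` argument of rankdata.
import numpy as np
from scipy.stats import norm, rankdata

def copula_transform(X: np.ndarray) -> np.ndarray:
    """Map each column of X to its Gaussian copula (normal) scores.

    X: array of shape (n_samples, n_features) with continuous marginals.
    Returns an array of the same shape whose columns are approximately
    standard normal while the dependence structure (the copula) of the
    data is preserved.
    """
    n = X.shape[0]
    # Empirical CDF via per-column ranks; dividing by n + 1 keeps the
    # values strictly inside (0, 1) so the normal quantile is finite.
    U = rankdata(X, axis=0) / (n + 1)
    return norm.ppf(U)

# A strictly monotone distortion of a feature (here exp) leaves its
# ranks, and therefore its copula scores, unchanged.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
X_distorted = X.copy()
X_distorted[:, 0] = np.exp(X_distorted[:, 0])
assert np.allclose(copula_transform(X), copula_transform(X_distorted))
```

The final assertion demonstrates the invariance in question: exponentiating a feature is strictly monotone, so its normal scores are identical before and after the distortion.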

Related research

- Learning Conditional Invariance through Cycle Consistency (11/25/2021): Identifying meaningful and independent factors of variation in a dataset...
- Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck (03/04/2022): The information bottleneck framework provides a systematic approach to l...
- Imagine That! Leveraging Emergent Affordances for Tool Synthesis in Reaching Tasks (09/30/2019): In this paper we investigate an artificial agent's ability to perform ta...
- Sparse Bottleneck Networks for Exploratory Analysis and Visualization of Neural Patch-seq Data (06/18/2020): In recent years, increasingly large datasets with two different sets of ...
- SPARLING: Learning Latent Representations with Extremely Sparse Activations (02/03/2023): Real-world processes often contain intermediate state that can be modele...
- Relative representations enable zero-shot latent space communication (09/30/2022): Neural networks embed the geometric structure of a data manifold lying i...
- NestedVAE: Isolating Common Factors via Weak Supervision (02/26/2020): Fair and unbiased machine learning is an important and active field of r...
