Learning Sparse Latent Representations with the Deep Copula Information Bottleneck

04/17/2018
by Aleksander Wieczorek, et al.

Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings, and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates into sparsity of the latent space of the new model. We evaluate our method on artificial and real data.
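As a rough illustration of the preprocessing idea (a minimal sketch, not the authors' code), a standard Gaussian copula transformation pushes each feature through its empirical CDF and then through the inverse standard normal CDF, so that any strictly monotone transformation of a marginal leaves the result unchanged. The function name and the use of NumPy/SciPy below are assumptions for illustration only.

```python
# Sketch of a Gaussian copula transformation (assumed implementation,
# not taken from the paper): map each feature to approximately
# standard-normal marginals while preserving the dependence structure
# (the copula) of the data.
import numpy as np
from scipy.stats import norm, rankdata


def gaussian_copula_transform(X: np.ndarray) -> np.ndarray:
    """Transform each column of X (n_samples, n_features) to ~N(0, 1) marginals.

    Ranks are rescaled into (0, 1) before applying the normal quantile
    function to avoid infinities at the boundaries.
    """
    n = X.shape[0]
    # Empirical CDF per feature via average ranks, rescaled into (0, 1).
    U = rankdata(X, axis=0) / (n + 1)
    # Inverse standard normal CDF yields Gaussian marginals; the result is
    # invariant to strictly monotone transformations of each input feature.
    return norm.ppf(U)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(1000, 3))       # skewed, non-Gaussian marginals
    X_tilde = gaussian_copula_transform(X)  # ~N(0, 1) marginals, same copula
    print(X_tilde.mean(axis=0), X_tilde.std(axis=0))
```

Such a transformed representation could then be fed to a deep information bottleneck model in place of the raw inputs; the invariance to marginal reparametrizations is what the abstract refers to as restoring the invariance properties of the information bottleneck method.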
