Deep Deterministic Information Bottleneck with Matrix-based Entropy Functional

01/31/2021
by Xi Yu, et al.

We introduce the matrix-based Rényi's α-order entropy functional to parameterize Tishby et al.'s information bottleneck (IB) principle with a neural network. We term our methodology Deep Deterministic Information Bottleneck (DIB), as it avoids variational inference and distributional assumptions. We show that deep neural networks trained with DIB outperform their variational counterparts, as well as networks trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attacks. Code is available at https://github.com/yuxi120407/DIB
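To make the estimator concrete, the following is a minimal PyTorch sketch of the matrix-based Rényi's α-order entropy functional (Sánchez Giraldo et al.) that DIB builds on. This is not the authors' released code (see the repository linked above); the Gaussian kernel, the kernel width sigma, and alpha = 1.01 are illustrative assumptions, not values taken from the paper.

```python
import torch


def gram_matrix(x, sigma=1.0):
    """Trace-normalized Gram matrix A from a Gaussian kernel (illustrative choice)."""
    # Pairwise squared Euclidean distances between samples (rows of x).
    dist_sq = torch.cdist(x, x) ** 2
    k = torch.exp(-dist_sq / (2 * sigma ** 2))
    # For a Gaussian kernel K_ii = 1, so dividing by n gives tr(A) = 1.
    return k / k.shape[0]


def renyi_entropy(a, alpha=1.01):
    """Matrix-based Renyi's alpha-order entropy S_alpha(A) in bits."""
    # Eigenvalues of the trace-normalized Gram matrix sum to 1 and play
    # the role of a probability spectrum; clamp guards tiny negatives
    # that arise from numerical error.
    eigvals = torch.linalg.eigvalsh(a).clamp(min=1e-8)
    return torch.log2((eigvals ** alpha).sum()) / (1.0 - alpha)


def joint_entropy(a, b, alpha=1.01):
    """Joint entropy S_alpha(A, B) via the Hadamard product of Gram matrices."""
    ab = a * b  # elementwise product, then renormalize the trace
    return renyi_entropy(ab / ab.trace(), alpha)


def mutual_information(x, t, sigma=1.0, alpha=1.01):
    """I_alpha(X; T) = S_alpha(A) + S_alpha(B) - S_alpha(A, B)."""
    a, b = gram_matrix(x, sigma), gram_matrix(t, sigma)
    return renyi_entropy(a, alpha) + renyi_entropy(b, alpha) - joint_entropy(a, b, alpha)


# Illustrative usage on a random mini-batch: x is the input batch,
# t stands in for the bottleneck codes an encoder would produce.
x = torch.randn(128, 784)
t = torch.randn(128, 32)
ixt = mutual_information(x, t)
```

Because every step is differentiable, an estimate of I(X; T) of this form can be added directly to the cross-entropy loss with a trade-off weight β and minimized by backpropagation, which is what lets DIB sidestep variational bounds and distributional assumptions.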

Related research

Multi-view Information Bottleneck Without Variational Approximation (04/22/2022)
By "intelligently" fusing the complementary information across different...

Class-Conditional Compression and Disentanglement: Bridging the Gap between Neural Networks and Naive Bayes Classifiers (06/06/2019)
In this draft, which reports on work in progress, we 1) adapt the inform...

Gated Information Bottleneck for Generalization in Sequential Environments (10/12/2021)
Deep neural networks suffer from poor generalization to unseen environme...

Entropy Penalty: Towards Generalization Beyond the IID Assumption (10/01/2019)
It has been shown that instead of learning actual object features, deep ...

Deep Deterministic Independent Component Analysis for Hyperspectral Unmixing (02/07/2022)
We develop a new neural network based independent component analysis (IC...

How Does Information Bottleneck Help Deep Learning? (05/30/2023)
Numerous deep learning algorithms have been inspired by and understood v...

Stochastic Weight Matrix-based Regularization Methods for Deep Neural Networks (09/26/2019)
The aim of this paper is to introduce two widely applicable regularizati...
