Extracting robust and accurate features via a robust information bottleneck

10/15/2019
by Ankit Pensia, et al.

We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space. Our method builds upon the idea of the information bottleneck by introducing an additional penalty term that encourages the Fisher information of the extracted features to be small, when parametrized by the inputs. By tuning the regularization parameter, we can explicitly trade off the opposing desiderata of robustness and accuracy when constructing a classifier. We derive the optimal solution to the robust information bottleneck when the inputs and outputs are jointly Gaussian, proving that the optimally robust features are also jointly Gaussian in that setting. Furthermore, we propose a method for optimizing a variational bound on the robust information bottleneck objective in general settings using stochastic gradient descent, which may be implemented efficiently in neural networks. Our experimental results for synthetic and real data sets show that the proposed feature extraction method indeed produces classifiers with increased robustness to perturbations.
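
The variational bound itself is not reproduced on this page, but the ingredients named in the abstract (a stochastic feature extractor, an accuracy term, a compression term, and a Fisher-information penalty, all trained with stochastic gradient descent) can be sketched in a few lines. The PyTorch sketch below is illustrative only and is not the authors' implementation: it assumes a Gaussian encoder q(z|x) = N(mu(x), diag(sigma(x)^2)) with a standard-normal prior, and it stands in for the Fisher-information penalty with a Hutchinson-style estimate of the squared Jacobian norm of mu(x) scaled by sigma(x), one plausible proxy under such an encoder. All names (RobustIBEncoder, fisher_penalty, beta, gamma) are placeholders.

# Minimal sketch of a variational robust-IB objective (illustrative, not the
# authors' code). Assumes q(z|x) = N(mu(x), diag(sigma(x)^2)) and prior N(0, I).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RobustIBEncoder(nn.Module):
    def __init__(self, in_dim, z_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_sigma = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_sigma(h).exp()

def kl_to_standard_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * (mu.pow(2) + sigma.pow(2) - 2 * sigma.log() - 1).sum(dim=1)

def fisher_penalty(encoder, x, n_projections=1):
    # Hutchinson-style estimate of E_x || J_mu(x) / sigma(x) ||_F^2, used here
    # as a stand-in for the Fisher information of the features given the input.
    x = x.clone().requires_grad_(True)
    mu, sigma = encoder(x)
    penalty = 0.0
    for _ in range(n_projections):
        v = torch.randn_like(mu)
        # Gradient of <v, mu/sigma> w.r.t. x is one random projection of the Jacobian.
        g, = torch.autograd.grad((v * mu / sigma).sum(), x, create_graph=True)
        penalty = penalty + g.pow(2).sum(dim=1)
    return penalty / n_projections

def robust_ib_loss(encoder, classifier, x, y, beta=1e-3, gamma=1e-2):
    mu, sigma = encoder(x)
    z = mu + sigma * torch.randn_like(sigma)          # reparameterization trick
    ce = F.cross_entropy(classifier(z), y)            # accuracy term (bound on I(Z;Y))
    kl = kl_to_standard_normal(mu, sigma).mean()      # compression term (bound on I(X;Z))
    fi = fisher_penalty(encoder, x).mean()            # robustness term (Fisher penalty)
    return ce + beta * kl + gamma * fi

In this sketch, classifier is any module mapping z to class logits. Setting gamma to zero recovers a standard variational information bottleneck objective, while increasing gamma trades accuracy for robustness, mirroring the trade-off described above.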


Related research

05/06/2017 · Nonlinear Information Bottleneck
Information bottleneck [IB] is a technique for extracting information in...

05/01/2020 · An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
Decisions of complex language understanding models can be rationalized b...

05/28/2019 · Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding
In this paper, we develop an unsupervised generative clustering framewor...

09/27/2020 · Learning Optimal Representations with the Decodable Information Bottleneck
We address the question of characterizing and finding optimal representa...

03/23/2021 · Drop-Bottleneck: Learning Discrete Compressed Representation for Noise-Robust Exploration
We propose a novel information bottleneck (IB) method named Drop-Bottlen...

05/15/2021 · Drill the Cork of Information Bottleneck by Inputting the Most Important Data
Deep learning has become the most powerful machine learning tool in the ...

06/11/2022 · Improving the Adversarial Robustness of NLP Models by Information Bottleneck
Existing studies have demonstrated that adversarial examples can be dire...
