Guess First to Enable Better Compression and Adversarial Robustness

01/10/2020
by Sicheng Zhu, et al.

Machine learning models are generally vulnerable to adversarial examples, in contrast to the robustness of human perception. In this paper, we take inspiration from one mechanism of human recognition and propose a bio-inspired classification framework in which model inference is conditioned on a label hypothesis. We provide a class of training objectives for this framework, along with an information bottleneck regularizer that exploits the fact that label information can be discarded during inference. The framework enables better compression of the mutual information between inputs and latent representations without loss of learning capacity, at the cost of a tractable increase in inference complexity. The improved compression and the elimination of label information in turn yield better adversarial robustness without sacrificing natural accuracy, as our experiments demonstrate.
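The abstract does not specify an architecture, so the PyTorch sketch below is only an illustration of the "guess first" idea: the encoder sees the input together with a one-hot label hypothesis, inference is run once per candidate class, and the hypothesis with the best compatibility score wins. The class and function names, the encoder layout, and the use of a Gaussian KL term as a variational upper bound on I(X; Z) are all assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuessFirstClassifier(nn.Module):
    """Illustrative sketch (not the paper's model): inference is conditioned
    on a label hypothesis, and the best-scoring hypothesis is the prediction."""

    def __init__(self, in_dim: int, num_classes: int, latent_dim: int = 32):
        super().__init__()
        self.num_classes = num_classes
        # Encoder consumes the input together with a one-hot label hypothesis.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim + num_classes, 128),
            nn.ReLU(),
        )
        self.mu = nn.Linear(128, latent_dim)       # mean of q(z | x, y)
        self.logvar = nn.Linear(128, latent_dim)   # log-variance of q(z | x, y)
        self.scorer = nn.Linear(latent_dim, 1)     # compatibility of (x, y)

    def encode(self, x, y_onehot):
        h = self.encoder(torch.cat([x, y_onehot], dim=-1))
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        # "Guess first": try every label hypothesis, keep one score per class.
        scores = []
        for c in range(self.num_classes):
            y = F.one_hot(
                torch.full((x.size(0),), c, dtype=torch.long, device=x.device),
                self.num_classes,
            ).float()
            mu, _ = self.encode(x, y)
            scores.append(self.scorer(mu))
        return torch.cat(scores, dim=-1)  # (batch, num_classes) logits

def ib_regularizer(mu, logvar):
    """Standard variational upper bound on I(X; Z): KL(q(z|x,y) || N(0, I)).
    Used here as a stand-in for the paper's information bottleneck penalty."""
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1).mean()
```

Under these assumptions, training would combine cross-entropy on the hypothesis scores with a weighted `ib_regularizer` computed from the true-label encoding; at test time the label hypothesis is discarded after scoring, so no label information needs to survive in the final representation.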


Related research

- 10/14/2016 · Are Accuracy and Robustness Correlated?
  Machine learning models are vulnerable to adversarial examples formed by...

- 06/04/2021 · Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness
  We investigate the HSIC (Hilbert-Schmidt independence criterion) bottlen...

- 03/02/2021 · Adversarial Examples for Unsupervised Machine Learning Models
  Adversarial examples causing evasive predictions are widely used to eval...

- 09/22/2018 · Unrestricted Adversarial Examples
  We introduce a two-player contest for evaluating the safety and robustne...

- 07/28/2020 · Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning
  We consider the theoretical problem of designing an optimal adversarial ...

- 07/25/2022 · Improving Adversarial Robustness via Mutual Information Estimation
  Deep neural networks (DNNs) are found to be vulnerable to adversarial no...

- 02/13/2020 · The Conditional Entropy Bottleneck
  Much of the field of Machine Learning exhibits a prominent set of failur...
