CEB Improves Model Robustness

02/13/2020
by Ian Fischer, et al.

We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is easy to implement and works in tandem with data augmentation procedures. We report results of a large-scale adversarial robustness study on CIFAR-10, as well as results on the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks.
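For readers unfamiliar with CEB, the sketch below illustrates what "easy to implement" can look like in practice: a variational CEB training loss combining a compression term, KL(e(z|x) || b(z|y)), which upper-bounds the residual information I(X;Z|Y), with the usual classification term for c(y|z). This is an illustrative PyTorch sketch, not the authors' implementation; the names (ceb_loss, z_mean, b_mean, etc.) and the trade-off coefficient beta are assumptions, and the paper controls compression strength with its own parameterization.

```python
# Illustrative sketch of a variational CEB loss (not the paper's code).
# e(z|x): forward encoder, b(z|y): backward (class-conditional) encoder,
# c(y|z): classifier applied to a sample z ~ e(z|x).
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence


def ceb_loss(z_mean, z_std, b_mean, b_std, logits, labels, beta=0.1):
    """beta * KL(e(z|x) || b(z|y)) + cross_entropy(c(y|z), y), averaged over the batch.

    The KL term upper-bounds I(X;Z|Y); beta (an assumed knob here) trades
    compression against prediction accuracy.
    """
    e_zx = Normal(z_mean, z_std)  # e(z|x)
    b_zy = Normal(b_mean, b_std)  # b(z|y)
    rate = kl_divergence(e_zx, b_zy).sum(dim=-1)            # compression term
    ce = F.cross_entropy(logits, labels, reduction="none")  # prediction term
    return (beta * rate + ce).mean()
```

Because the loss only adds a KL term on top of a standard classifier, it composes directly with whatever data augmentation pipeline feeds the encoder.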

Related research

03/15/2023 - Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement
  We propose Dataset Reinforcement, a strategy to improve a dataset once s...

02/04/2023 - Interpolation for Robust Learning: Data Augmentation on Geodesics
  We propose to study and promote the robustness of a model as per its per...

08/12/2023 - DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning
  Neural networks are prone to learn easy solutions from superficial stati...

04/06/2023 - Benchmarking Robustness to Text-Guided Corruptions
  This study investigates the robustness of image classifiers to text-guid...

12/30/2021 - Towards Robustness of Neural Networks
  We introduce several new datasets namely ImageNet-A/O and ImageNet-R as ...

04/06/2023 - Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
  Deep networks have achieved impressive results on a range of well-curate...

07/17/2019 - Robustness properties of Facebook's ResNeXt WSL models
  We investigate the robustness properties of ResNeXt image recognition mo...
