
Optimizing Information Loss Towards Robust Neural Networks

08/07/2020
by Philip Sperl, et al.

Neural Networks (NNs) are vulnerable to adversarial examples. Such inputs differ only slightly from their benign counterparts yet provoke misclassifications of the attacked NNs. The perturbations required to craft these examples are often negligible and even imperceptible to humans. To protect deep-learning-based systems from such attacks, several countermeasures have been proposed, with adversarial training still considered the most effective. Here, NNs are iteratively retrained on adversarial examples, a computationally expensive and time-consuming process that often degrades performance. To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach, which we call entropic retraining. Based on an information-theoretic analysis, entropic retraining mimics the effects of adversarial training without the laborious generation of adversarial examples. We empirically show that entropic retraining leads to a significant increase in NNs' security and robustness while relying only on the original training data. With our prototype implementation, we validate the effectiveness of our approach for various NN architectures and data sets.
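The abstract describes the two training regimes only at a high level; the paper's actual entropic retraining objective is not given here. Purely as an illustration of the contrast being drawn, the following Python/PyTorch sketch shows (a) a standard FGSM-based adversarial training step, which must craft fresh adversarial examples every iteration, and (b) a hypothetical entropy-regularized step that uses only the benign data. The FGSM attack, the eps and beta values, and the softmax-entropy penalty are assumptions made for illustration and are not the authors' method.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    # Craft an FGSM adversarial example: one signed-gradient step on the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # Classic adversarial training: retrain on freshly crafted adversarial inputs.
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def entropy_regularized_step(model, optimizer, x, y, beta=0.1):
    # Hypothetical alternative (illustrative assumption, not the paper's
    # entropic retraining): add an entropy penalty on the softmax output so
    # that training shapes the information retained by the network, using
    # only the original benign data -- no adversarial examples are generated.
    optimizer.zero_grad()
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
    loss = F.cross_entropy(logits, y) + beta * entropy
    loss.backward()
    optimizer.step()
    return loss.item()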


Related Research

12/27/2019 | Efficient Adversarial Training with Transferable Adversarial Examples
Adversarial training is an effective defense method to protect classific...

05/27/2022 | R-HTDetector: Robust Hardware-Trojan Detection Based on Adversarial Training
Hardware Trojans (HTs) have become a serious problem, and extermination ...

08/19/2020 | Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
Despite their performance, Artificial Neural Networks are not reliable e...

08/20/2021 | ASAT: Adaptively Scaled Adversarial Training in Time Series
Adversarial training is a method for enhancing neural networks to improv...

08/12/2020 | Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise
Sensitivity to adversarial noise hinders deployment of machine learning ...

03/04/2021 | Gradient-Guided Dynamic Efficient Adversarial Training
Adversarial training is arguably an effective but time-consuming way to ...

07/01/2022 | Efficient Adversarial Training With Data Pruning
Neural networks are susceptible to adversarial examples-small input pert...