Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training

08/19/2020
by   Alfred Laugros, et al.

Despite their performance, Artificial Neural Networks are not reliable enough for most industrial applications: they are sensitive to noise, rotations, blur and adversarial examples. There is a need for defenses that protect against a wide range of perturbations, covering both common corruptions and adversarial examples. We propose a new data augmentation strategy, called M-TLAT, designed to address robustness in a broad sense. Our approach combines the Mixup augmentation with a new adversarial training algorithm called Targeted Labeling Adversarial Training (TLAT). The idea of TLAT is to interpolate the target labels of adversarial examples with the ground-truth labels. We show that M-TLAT can increase the robustness of image classifiers to nineteen common corruptions and five adversarial attacks, without reducing the accuracy on clean samples.
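To make the two ingredients concrete, here is a minimal PyTorch sketch of how Mixup and TLAT might compose in one training batch. This is an illustration, not the authors' implementation: the targeted FGSM-style attack, the random choice of target class, the use of the attack budget eps as the label-interpolation coefficient, the helper names mixup/tlat/m_tlat_batch, and the decision to train on mixed and adversarial samples in the same batch are all assumptions.

import torch
import torch.nn.functional as F

def mixup(x, y, alpha=0.2):
    # Mixup: convex combination of shuffled pairs of inputs and soft labels.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

def tlat(model, x, y, eps=8 / 255):
    # TLAT: craft a *targeted* adversarial example, then label it with an
    # interpolation of the ground-truth label and the attack's target label.
    n, c = y.size(0), y.size(1)
    y_target = F.one_hot(torch.randint(c, (n,)), c).float()  # random targets (assumed)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_target.argmax(dim=1))
    grad, = torch.autograd.grad(loss, x_adv)
    # Targeted FGSM-style step: move the input *toward* the target class.
    x_adv = (x_adv - eps * grad.sign()).clamp(0, 1).detach()
    # Label interpolation; using eps as the coefficient is an assumption.
    y_adv = (1 - eps) * y + eps * y_target
    return x_adv, y_adv

def m_tlat_batch(model, x, y_onehot):
    # One M-TLAT batch: Mixup first, then TLAT on the mixed samples.
    x_mix, y_mix = mixup(x, y_onehot)
    x_adv, y_adv = tlat(model, x_mix, y_mix)
    logits = model(torch.cat([x_mix, x_adv]))
    targets = torch.cat([y_mix, y_adv])
    # Cross-entropy against soft labels, since both Mixup and TLAT produce them.
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

The soft-label cross-entropy in the last step is needed because both augmentations produce interpolated label distributions rather than hard class indices.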


Related research

- 10/25/2019: Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
  Adversarial training is one of the strongest defenses against adversaria...

- 02/16/2023: Masking and Mixing Adversarial Training
  While convolutional neural networks (CNNs) have achieved excellent perfo...

- 05/24/2022: One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
  Unlearnable examples (ULEs) aim to protect data from unauthorized usage ...

- 03/02/2021: Adversarial Examples for Unsupervised Machine Learning Models
  Adversarial examples causing evasive predictions are widely used to eval...

- 03/17/2022: On the Properties of Adversarially-Trained CNNs
  Adversarial Training has proved to be an effective training paradigm to ...

- 04/11/2021: Achieving Model Robustness through Discrete Adversarial Training
  Discrete adversarial attacks are symbolic perturbations to a language in...

- 09/04/2019: Are Adversarial Robustness and Common Perturbation Robustness Independant Attributes?
  Neural Networks have been shown to be sensitive to common perturbations ...
