Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey

07/01/2020
by   Samuel Henrique Silva, et al.

As we seek to deploy machine learning models beyond virtual and controlled domains, it is critical to analyze not only their accuracy, or the fact that they work most of the time, but whether such models are truly robust and reliable. This paper surveys strategies for implementing adversarially robust training algorithms that help guarantee the safety of machine learning systems. We provide a taxonomy for classifying adversarial attacks and defenses, formulate the Robust Optimization problem in a min-max setting, and divide it into three subcategories: Adversarial (re)Training, Regularization Approaches, and Certified Defenses. We survey the most recent and important results on adversarial example generation and on defense mechanisms that either use adversarial (re)Training as their main defense against perturbations or add regularization terms which change the behavior of the gradient, making it harder for attackers to achieve their objective. Alternatively, we survey methods that formally derive certificates of robustness, either by exactly solving the optimization problem or by approximating it with upper or lower bounds. In addition, we discuss the challenges faced by most of the recent algorithms and present future research perspectives.
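The inner maximization of the min-max problem mentioned above is usually approximated by a gradient-based attack. As a minimal sketch (not the paper's own implementation), the following toy example applies the Fast Gradient Sign Method to a fixed linear classifier with logistic loss; the weights, input, and epsilon here are hypothetical values chosen for illustration. For a linear model under an l-infinity budget, one signed gradient step solves the inner maximization exactly.

```python
import numpy as np

# Hypothetical toy setup: a fixed linear classifier with logistic loss.
w = np.array([1.0, -2.0, 0.5])   # model weights (assumed, for illustration)
x = np.array([0.3, 0.1, -0.2])   # clean input
y = 1.0                          # true label in {-1, +1}

def loss(x):
    # Logistic loss for a linear model: log(1 + exp(-y * w.x))
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm(x, eps):
    # Gradient of the logistic loss with respect to the input x.
    margin = y * np.dot(w, x)
    grad = -y * w / (1.0 + np.exp(margin))
    # FGSM: one step of size eps in the sign direction of the gradient,
    # i.e. the inner maximization of the min-max problem under an
    # l-infinity perturbation budget.
    return x + eps * np.sign(grad)

x_adv = fgsm(x, eps=0.1)
print(loss(x), loss(x_adv))
```

Because the logistic loss is convex in the input for a linear model, the perturbed loss can never be smaller than the clean loss; adversarial (re)Training then corresponds to the outer minimization, i.e. training on `x_adv` instead of `x`.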


Related research

- Recent Advances in Adversarial Training for Adversarial Robustness (02/02/2021)
  Adversarial training is one of the most effective approaches defending a...

- A Useful Taxonomy for Adversarial Robustness of Neural Networks (10/23/2019)
  Adversarial attacks and defenses are currently active areas of research ...

- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey (03/11/2023)
  Adversarial attacks and defenses in machine learning and deep neural net...

- It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness (03/17/2023)
  Adversarial examples are inputs to machine learning models that an attac...

- Scaling Adversarial Training to Large Perturbation Bounds (10/18/2022)
  The vulnerability of Deep Neural Networks to Adversarial Attacks has fue...

- Enhancing Adversarial Robustness for Deep Metric Learning (03/02/2022)
  Owing to security implications of adversarial vulnerability, adversarial...

- On Assessing The Safety of Reinforcement Learning algorithms Using Formal Methods (11/08/2021)
  The increasing adoption of Reinforcement Learning in safety-critical sys...
