Adversarial Robustness Assessment of NeuroEvolution Approaches

07/12/2022
by Inês Valentim, et al.

NeuroEvolution automates the generation of Artificial Neural Networks through the application of techniques from Evolutionary Computation. The main goal of these approaches is to build models that maximize predictive performance, sometimes with an additional objective of minimizing computational complexity. Although the evolved models achieve competitive results performance-wise, their robustness to adversarial examples, which becomes a concern in security-critical scenarios, has received limited attention. In this paper, we evaluate the adversarial robustness of models found by two prominent NeuroEvolution approaches on the CIFAR-10 image classification task: DENSER and NSGA-Net. Since the models are publicly available, we consider white-box untargeted attacks, where the perturbations are bounded by either the L2 or the L∞ norm. Our results show that, similarly to manually designed networks, when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero under both distance metrics. The DENSER model is an exception to this trend, showing some resistance under the L2 threat model, where its accuracy drops from 93.70% but does not reach zero even under iterative attacks. Additionally, we analyze the impact of the pre-processing applied to the data before the first layer of the network. Our observations suggest that some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness. Thus, this choice should not be neglected when automatically designing networks for applications in which adversarial attacks are likely to occur.
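For readers who want to reproduce this kind of evaluation, the sketch below shows a generic iterative white-box untargeted attack under the L∞ threat model (projected gradient descent) and a robust-accuracy loop. It is a minimal illustration, not the authors' exact protocol: the choice of PGD, the budget eps = 8/255, the step size, and the step count are all assumptions, and the model and data loader are placeholders.

```python
# Minimal sketch of a white-box untargeted Linf evaluation, in the spirit of
# the assessment described above. The attack hyperparameters (eps, alpha,
# steps) are illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative Linf-bounded untargeted attack (projected gradient descent)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take an ascent step on the loss, then project back into the
        # eps-ball around the clean input and into the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cpu"):
    """Accuracy of `model` on adversarially perturbed inputs from `loader`."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Note: if the model standardizes inputs before its first layer,
        # e.g. (x - mean) / std, the perturbation seen internally is scaled
        # to eps / std -- one way pre-processing can amplify perturbations,
        # as the abstract observes.
        x_adv = pgd_linf(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

An L2 counterpart of this attack would replace grad.sign() with the gradient normalized by its per-example L2 norm and project onto an L2 ball of radius eps instead of clamping coordinate-wise.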

Related research

- 02/15/2021 · Generating Structured Adversarial Attacks Using Frank-Wolfe Method
- 09/02/2021 · Impact of Attention on Adversarial Robustness of Image Classification Models
- 08/31/2021 · EG-Booster: Explanation-Guided Booster of ML Evasion Attacks
- 12/10/2020 · Robustness and Transferability of Universal Attacks on Compressed Models
- 04/22/2020 · Adversarial examples and where to find them
- 02/01/2019 · The Efficacy of SHIELD under Different Threat Models
- 09/12/2023 · Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning
