Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks

01/16/2021
by   Jia Liu, et al.

Many existing deep learning models are vulnerable to adversarial examples that are imperceptible to humans. To address this issue, various methods have been proposed to design network architectures that are robust to one particular type of adversarial attack. In practice, however, it is impossible to predict beforehand which type of attack a machine learning model will suffer from. To address this challenge, we propose to search for deep neural architectures that are robust to five well-known types of adversarial attacks using a multi-objective evolutionary algorithm. To reduce the computational cost, the robustness of each newly generated neural architecture is estimated at each generation as the normalized error rate under one randomly chosen attack. All non-dominated network architectures obtained by the proposed method are then fully trained against randomly chosen adversarial attacks and tested on two widely used datasets. Our experimental results demonstrate that the optimized neural architectures found by the proposed approach outperform state-of-the-art networks widely used in the literature in terms of classification accuracy under different adversarial attacks.
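The cost-saving evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the attack names, the `BASELINE_ERROR` normalizers, and the `eval_error_under_attack` callback are all assumptions standing in for the paper's actual evaluation pipeline, and the non-dominated filter is a plain Pareto check rather than a full evolutionary algorithm.

```python
import random

# Five well-known attacks, as assumed stand-ins for the paper's choices.
ATTACKS = ["FGSM", "PGD", "CW", "DeepFool", "BIM"]

# Per-attack normalizing constants (hypothetical; the paper normalizes the
# error rate, but the exact normalizer is not given in the abstract).
BASELINE_ERROR = {a: 1.0 for a in ATTACKS}

def robustness_objective(arch, eval_error_under_attack):
    """Sample ONE attack at random and return its normalized error rate.

    This mirrors the abstract's cost-saving trick: instead of evaluating an
    architecture against all five attacks in every generation, a single
    attack is drawn per candidate.
    """
    attack = random.choice(ATTACKS)
    err = eval_error_under_attack(arch, attack)  # error rate in [0, 1]
    return err / BASELINE_ERROR[attack]

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population, objectives):
    """Return the non-dominated members of the population.

    `objectives` holds one tuple per individual, e.g. (clean error,
    normalized robust error); all objectives are minimized.
    """
    front = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi)
                   for j, fj in enumerate(objectives) if j != i):
            front.append(population[i])
    return front
```

In a full run, `non_dominated` would be applied after each generation, and the surviving architectures would then be fully trained and tested as the abstract describes.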


