Searching for Robust Neural Architectures via Comprehensive and Reliable Evaluation

03/07/2022
by Jialiang Sun, et al.

Neural architecture search (NAS) can help discover robust network architectures, and defining robustness evaluation metrics is a key step in this process. However, current robustness evaluations in NAS are neither sufficiently comprehensive nor reliable. In particular, common practice considers only adversarial noise and quantified metrics such as the Jacobian matrix, whereas studies have shown that models are also vulnerable to other types of noise, such as natural noise. In addition, existing methods that use adversarial noise for evaluation rely only on the robust accuracy under FGSM or PGD, but these attacks do not provide a sufficiently reliable evaluation, leaving models vulnerable to stronger attacks. To alleviate these problems, we propose a novel framework, called Auto Adversarial Attack and Defense (AAAD), which employs neural architecture search and considers four types of robustness evaluation, namely adversarial noise, natural noise, system noise, and quantified metrics, thereby assisting in finding more robust architectures. Moreover, within the adversarial-noise evaluation, we use a composite adversarial attack obtained by random search as a new metric for evaluating the robustness of model architectures. Empirical results on the CIFAR-10 dataset show that the searched, efficient attack helps find more robust architectures.
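As a rough illustration of the composite-attack metric described above, the sketch below randomly searches over short sequences of standard attacks (FGSM and PGD here) and scores a candidate model by its worst-case robust accuracy. All function names, the attack pool, and the search budget are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: random search over composite adversarial attacks as a
# robustness metric. Assumes a classification `model` and a data `loader`.
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM perturbation within an L-inf budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha=0.01, steps=10):
    """Multi-step PGD perturbation projected back into the eps ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

ATTACK_POOL = [fgsm, pgd]  # assumed pool; the paper may use a richer set

def robust_accuracy(model, loader, attack_seq, eps, device="cpu"):
    """Accuracy under a composite attack: apply each attack in sequence.
    Note: chaining attacks this way can exceed a single eps budget; the
    exact constraint used in AAAD may differ."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        for attack in attack_seq:
            x = attack(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def random_search_composite_attack(model, loader, eps=8 / 255,
                                   seq_len=3, trials=20):
    """Sample random attack sequences and keep the strongest one,
    i.e. the sequence yielding the lowest robust accuracy."""
    best_seq, best_acc = None, 1.0
    for _ in range(trials):
        seq = [random.choice(ATTACK_POOL) for _ in range(seq_len)]
        acc = robust_accuracy(model, loader, seq, eps)
        if acc < best_acc:
            best_seq, best_acc = seq, acc
    return best_seq, best_acc
```

In a NAS loop along the lines of AAAD, the robust accuracy under the searched sequence would serve as one evaluation term alongside natural-noise, system-noise, and quantified metrics.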


