Evolving Robust Neural Architectures to Defend from Adversarial Attacks

Deep neural networks have been shown to misclassify slightly modified input images. Recently, many defenses have been proposed, but none has consistently improved the robustness of neural networks. Here, we propose to use attacks as a function evaluation to automatically search for architectures that can resist such attacks. Experiments with neural architecture search algorithms from the literature show that, despite their accurate results, they are unable to find robust architectures, largely because of their limited search space. By creating a novel neural architecture search whose search space allows dense layers to connect to convolutional layers and vice versa, and includes multiplication, addition, and concatenation layers, we were able to evolve an architecture that achieves 58% accuracy on adversarial samples. Interestingly, this inherent robustness of the evolved architecture rivals state-of-the-art defenses such as adversarial training, even though it is trained only on the original training dataset. Moreover, the evolved architecture exhibits some peculiar traits that might be useful for developing even more robust architectures. Thus, these results demonstrate that more robust architectures exist and open up a new range of possibilities for the development and exploration of deep neural networks using automatic architecture search. Code available at http://bit.ly/RobustArchitectureSearch.
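The central idea, using an adversarial attack as the fitness evaluation inside the architecture search, can be sketched roughly as follows. This is a minimal illustration and not the paper's implementation: it assumes a PyTorch setup, uses a one-step FGSM attack as a stand-in for the attacks actually employed during the search, and names such as candidate_model and validation_loader are hypothetical placeholders.

import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_accuracy(model, loader, eps=0.03):
    """Fitness of a candidate architecture: accuracy on attacked inputs."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)


# Inside an evolutionary loop, each trained candidate would be scored with
# adversarial_accuracy(candidate_model, validation_loader), and the
# highest-scoring architectures kept for mutation and recombination.

In this sketch the attack plays the role of the "function evaluation" described above: architectures are selected not for clean accuracy but for how well they hold up under perturbed inputs.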


Related research:

04/06/2023 - Robust Neural Architecture Search
12/28/2022 - Differentiable Search of Accurate and Robust Architectures
03/07/2022 - Searching for Robust Neural Architectures via Comprehensive and Reliable Evaluation
03/23/2021 - Neural Architecture Search From Fréchet Task Distance
06/15/2019 - Uncovering Why Deep Neural Networks Lack Robustness: Representation Metrics that Link to Adversarial Attacks
06/30/2021 - Exploring Robustness of Neural Networks through Graph Measures
06/13/2021 - ATRAS: Adversarially Trained Robust Architecture Search
