DSRNA: Differentiable Search of Robust Neural Architectures

12/11/2020
by   Ramtin Hosseini, et al.

In deep learning applications, the architectures of deep neural networks are crucial for achieving high accuracy. Many methods have been proposed to automatically search for high-performance neural architectures. However, the searched architectures are prone to adversarial attacks: a small perturbation of the input data can change an architecture's prediction outcome significantly. To address this problem, we propose methods for the differentiable search of robust neural architectures. In our methods, two differentiable metrics are defined to measure an architecture's robustness, based on a certified lower bound and a Jacobian norm bound. We then search for robust architectures by maximizing these robustness metrics. Unlike previous approaches, which improve architectures' robustness implicitly by performing adversarial training or injecting random noise, our methods explicitly and directly maximize the robustness metrics to obtain robust architectures. On CIFAR-10, ImageNet, and MNIST, we perform game-based and verification-based evaluations of the robustness of our methods. The experimental results show that our methods 1) are more robust to various norm-bounded attacks than several robust NAS baselines; 2) are more accurate than baselines when there are no attacks; and 3) have significantly higher certified lower bounds than baselines.
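To make the search objective concrete, below is a minimal PyTorch sketch of how a differentiable robustness metric could be folded into an architecture-search update. It is illustrative only, not the authors' implementation: the names (jacobian_norm_surrogate, search_step, lambda_rob) are hypothetical, the Jacobian term is a cheap one-projection surrogate rather than the paper's exact bound, and the certified-lower-bound metric is omitted.

```python
# Minimal sketch (not the authors' code) of maximizing a differentiable
# robustness metric alongside the task loss during architecture search.
# jacobian_norm_surrogate, search_step, and lambda_rob are hypothetical
# names introduced here for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def jacobian_norm_surrogate(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Cheap, differentiable surrogate for the input-output Jacobian norm.

    Uses one backward pass through the summed logits, so it measures
    ||J^T 1|| per example rather than the full Frobenius norm. A smaller
    value means the outputs move less under small input perturbations.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    (grad,) = torch.autograd.grad(logits.sum(), x, create_graph=True)
    return grad.flatten(1).norm(dim=1).mean()


def search_step(model, arch_optimizer, x, y, lambda_rob=0.1):
    """One update of the (relaxed) architecture parameters.

    Loss = task loss - lambda_rob * robustness metric, where the metric
    is the negated Jacobian surrogate; minimizing this loss therefore
    maximizes the differentiable robustness metric.
    """
    task_loss = F.cross_entropy(model(x), y)
    robustness = -jacobian_norm_surrogate(model, x)  # higher = more robust
    loss = task_loss - lambda_rob * robustness
    arch_optimizer.zero_grad()
    loss.backward()
    arch_optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy stand-in for a DARTS-style supernet, just to show the API.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
    print(search_step(model, opt, x, y))
```

In a full DARTS-style search this update would alternate with weight updates on a separate data split, and a certified-lower-bound metric would enter the objective in the same additive way.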

