Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective

12/21/2022
by Shihua Huang, et al.

Efforts to improve the adversarial robustness of convolutional neural networks have primarily focused on developing more effective adversarial training methods. In contrast, little attention has been devoted to analyzing the role of architectural elements (such as topology, depth, and width) in adversarial robustness. This paper seeks to bridge this gap and present a holistic study of the impact of architectural design on adversarial robustness. We focus on residual networks and consider architecture design at the block level, i.e., topology, kernel size, activation, and normalization, as well as at the network scaling level, i.e., depth and width of each block in the network. In both cases, we first derive insights through systematic ablative experiments. Then we design a robust residual block, dubbed RobustResBlock, and a compound scaling rule, dubbed RobustScaling, to distribute depth and width at the desired FLOP count. Finally, we combine RobustResBlock and RobustScaling and present a portfolio of adversarially robust residual networks, RobustResNets, spanning a broad spectrum of model capacities. Experimental validation across multiple datasets and adversarial attacks demonstrates that RobustResNets consistently outperform both the standard WRNs and other existing robust architectures, achieving state-of-the-art AutoAttack robust accuracy of 61.1%, while being 2× more compact in terms of parameters. Code is available at <https://github.com/zhichao-lu/robust-residual-network>.
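The paper's actual RobustScaling coefficients come from its ablation studies; as an illustration only, a compound scaling rule of this general kind (in the EfficientNet style) can be sketched as follows. The function name and the `alpha`/`beta` values here are hypothetical, not the paper's:

```python
import math

def scale_network(base_depths, base_widths, flop_ratio, alpha=1.4, beta=1.2):
    """Illustrative compound-scaling sketch (hypothetical coefficients).

    Grows per-stage depth by alpha**phi and width by beta**phi, choosing
    phi so total FLOPs scale by roughly flop_ratio. FLOPs grow about
    linearly with depth and quadratically with width, so one compound
    step multiplies FLOPs by roughly alpha * beta**2.
    """
    step = alpha * beta ** 2                      # FLOP multiplier per compound step
    phi = math.log(flop_ratio) / math.log(step)   # steps needed to hit the target
    depths = [max(1, round(d * alpha ** phi)) for d in base_depths]
    widths = [max(8, round(w * beta ** phi)) for w in base_widths]
    return depths, widths

# Example: roughly double the FLOPs of a 3-stage residual network.
depths, widths = scale_network([4, 4, 4], [160, 320, 640], flop_ratio=2.0)
```

The point of a compound rule is that depth and width are grown jointly under one FLOP budget, rather than scaling either dimension alone as standard WRNs do.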


Related research

- Robust Principles: Architectural Design Principles for Adversarially Robust CNNs (08/30/2023). Our research aims to unify existing works' diverging opinions on how arc...
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks (10/07/2021). Deep neural networks (DNNs) are known to be vulnerable to adversarial at...
- SplitNet: Divide and Co-training (11/30/2020). The width of a neural network matters since increasing the width will ne...
- RobArch: Designing Robust Architectures against Adversarial Attacks (01/08/2023). Adversarial Training is the most effective approach for improving the ro...
- AsymmNet: Towards ultralight convolution neural networks using asymmetrical bottlenecks (04/15/2021). Deep convolutional neural networks (CNN) have achieved astonishing resul...
- Exploring the Relationship between Architecture and Adversarially Robust Generalization (09/28/2022). Adversarial training has been demonstrated to be one of the most effecti...
- Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling (02/24/2022). Recent works reveal that re-calibrating the intermediate activation of a...
