Automatic Perturbation Analysis on General Computational Graphs

02/28/2020
by Kaidi Xu, et al.

Linear relaxation based perturbation analysis for neural networks, which aims to compute tight linear bounds on output neurons under a given amount of input perturbation, has become a core component of robustness verification and certified defense. However, the majority of linear relaxation based methods only consider feed-forward ReLU networks. Although several works have extended them to more complicated networks, these extensions typically require tedious manual derivation and implementation, which are error-prone, and their limited flexibility makes it difficult to handle more complicated tasks. In this paper, we take a significant leap by developing an automatic perturbation analysis algorithm that enables perturbation analysis on any neural network structure, with the computation carried out automatically in a manner similar to the back-propagation algorithm for gradient computation. The main idea is to express a network as a computational graph and then generalize linear relaxation algorithms such as CROWN into graph algorithms. Our algorithm is itself differentiable and integrated with PyTorch, which makes it possible to optimize network parameters to reshape the bounds into desired specifications, enabling automatic robustness verification and certified defense. In particular, we demonstrate a few tasks that are not easily achievable without an automatic framework. We first perform certified robust training and robustness verification for complex natural language models, which would be challenging with manual derivation and implementation. We further show that our algorithm can be used for tasks beyond certified defense: we create a neural network with a provably flat optimization landscape and study its generalization capability, and we show that this network preserves accuracy better after aggressive weight quantization. Code is available at https://github.com/KaidiXu/auto_LiRPA.
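The object computed by such an analysis is a pair of linear bounds lb <= f(x0 + delta) <= ub that hold for every perturbation ||delta||_p <= eps. The sketch below shows what this looks like through the linked auto_LiRPA library, following the usage documented in the repository's README; the toy model, perturbation radius, and loss are illustrative choices, not taken from the paper, and API details may differ across library versions.

```python
# A minimal sketch of the workflow described in the abstract, assuming
# the auto_LiRPA package from the linked repository is installed.
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Any nn.Module is traced into a computational graph on which the
# generalized CROWN-style bound propagation runs automatically.
# (Toy fully-connected model for illustration only.)
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
dummy = torch.zeros(1, 784)
bounded_model = BoundedModule(model, dummy)

# Wrap the input with an l_inf perturbation of radius eps = 0.1
# (an illustrative radius, not a value from the paper).
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.1)
x = BoundedTensor(dummy, ptb)

# Compute linear-relaxation bounds on every output neuron:
# lb <= model(x + delta) <= ub for all ||delta||_inf <= eps.
lb, ub = bounded_model.compute_bounds(x=(x,), method="backward")

# The bounds are differentiable, so they can serve directly as a
# training objective (an illustrative certified-defense-style loss).
loss = -lb.sum()
loss.backward()
```

Because `compute_bounds` returns ordinary PyTorch tensors, the final two lines illustrate the differentiability claim: gradients of a bound-based loss flow back into the model parameters, which is what enables certified robust training on top of the bound computation.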


