The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification

06/24/2020
by Christian Tjandraatmadja et al.

We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new, tightened convex relaxation for ReLU neurons. Unlike previous single-neuron relaxations, which focus only on the univariate input space of the ReLU, our method considers the multivariate input space of the affine pre-activation function preceding the ReLU. Using results from submodularity and convex geometry, we derive an explicit description of the tightest possible convex relaxation when this multivariate input ranges over a box domain. We show that our convex relaxation is significantly stronger than the commonly used univariate-input relaxation, which has been proposed as a natural convex relaxation barrier for verification. While our description of the relaxation may require an exponential number of inequalities, we show that they can be separated in linear time and hence incorporated efficiently into optimization algorithms on an as-needed basis. Based on this novel relaxation, we design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches. In both cases, we show that for a modest increase in computational effort, our strengthened relaxation enables us to verify a significantly larger number of instances than similar algorithms.
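
For context, the baseline the abstract refers to is the standard univariate single-neuron relaxation over the pre-activation interval. Below is a minimal NumPy sketch (function names and structure are illustrative, not taken from the paper) of that baseline: it propagates interval bounds for the affine pre-activation over a box input domain and then applies the well-known "triangle" relaxation y >= 0, y >= z, y <= u*(z - l)/(u - l) to each unstable ReLU. The paper's tightened relaxation instead works directly in the box domain of the affine inputs and separates its exponentially many inequalities in linear time; that machinery is not reproduced here.

import numpy as np

def preactivation_bounds(W, b, x_lo, x_hi):
    """Interval bounds l <= W @ x + b <= u for x in the box [x_lo, x_hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    l = W_pos @ x_lo + W_neg @ x_hi + b
    u = W_pos @ x_hi + W_neg @ x_lo + b
    return l, u

def triangle_relaxation(l, u):
    """Univariate single-neuron relaxation of y = max(0, z) for z in [l, u].

    Returns (slope, intercept) of the upper bound y <= slope * z + intercept;
    the lower bounds are y >= 0 and y >= z. For stable neurons the ReLU is
    affine on [l, u], so the bound is exact.
    """
    slope = np.zeros_like(l)
    intercept = np.zeros_like(l)
    unstable = (l < 0) & (u > 0)
    slope[unstable] = u[unstable] / (u[unstable] - l[unstable])
    intercept[unstable] = -slope[unstable] * l[unstable]
    slope[l >= 0] = 1.0  # always-active neurons: y = z
    # always-inactive neurons (u <= 0): y = 0, i.e. slope = intercept = 0
    return slope, intercept

# Example: a 2-neuron affine layer over the input box [-1, 1]^3.
W = np.array([[1.0, -2.0, 0.5], [0.3, 0.3, 0.3]])
b = np.array([0.1, -0.2])
l, u = preactivation_bounds(W, b, -np.ones(3), np.ones(3))
print(triangle_relaxation(l, u))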


