Shared Certificates for Neural Network Verification

09/01/2021
by Christian Sprecher, et al.

Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a convex set of reachable values through each layer. This process is repeated independently for each input (e.g., image) and perturbation (e.g., rotation), leading to an expensive overall proof effort when handling an entire dataset. In this work, we introduce a new method for reducing this verification cost, based on the key insight that the convex sets obtained at intermediate layers can overlap across different inputs and perturbations. Leveraging this insight, we introduce the general concept of shared certificates, which enables proof-effort reuse across multiple inputs and drives down overall verification cost. We validate our insight through an extensive experimental evaluation and demonstrate the effectiveness of shared certificates on a range of datasets and attack specifications, including geometric, patch, and ℓ_∞ input perturbations.
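
To make the idea concrete, below is a minimal sketch (not the paper's implementation) of proof sharing via containment of intermediate convex sets, using the simple interval (box) abstract domain: if a new input's box at some layer is contained in a box from an already-verified run, it inherits that certificate, because box propagation is monotone with respect to inclusion. The network layout, cache structure, and `is_safe` check are illustrative assumptions, not the authors' API.

```python
import numpy as np

def propagate_boxes(lb, ub, layers):
    """Interval (box) propagation through affine + ReLU layers.
    Returns the post-activation box at every layer."""
    boxes = []
    for i, (W, b) in enumerate(layers):
        center, radius = (lb + ub) / 2.0, (ub - lb) / 2.0
        c, r = W @ center + b, np.abs(W) @ radius
        lb, ub = c - r, c + r
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
        boxes.append((lb, ub))
    return boxes

def contained(inner, outer):
    """Box containment: inner lies componentwise inside outer."""
    return np.all(inner[0] >= outer[0]) and np.all(inner[1] <= outer[1])

def verify_shared(lb, ub, layers, cache, is_safe):
    """Verify one input box; reuse a cached certificate when an intermediate
    box is contained in one whose remaining proof already succeeded.
    (Interval propagation is monotone under inclusion, so containment at
    layer k implies containment of all later bounds.)"""
    boxes = propagate_boxes(lb, ub, layers)
    for k, box in enumerate(boxes[:-1]):
        if any(contained(box, tmpl) for tmpl in cache.get(k, [])):
            return True                    # shared certificate: skip the rest
    if is_safe(*boxes[-1]):                # full proof needed for this input
        for k, box in enumerate(boxes[:-1]):
            cache.setdefault(k, []).append(box)   # store boxes for reuse
        return True
    return False

# Toy usage (hypothetical network): class 0 is verified if its output lower
# bound exceeds the upper bounds of all other classes.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
is_safe = lambda lo, hi: lo[0] > np.max(hi[1:])
cache = {}
x = rng.standard_normal(4)
print(verify_shared(x - 0.01, x + 0.01, layers, cache, is_safe))
```

Real verifiers use tighter relaxations (e.g., zonotopes or DeepPoly) and more carefully chosen templates than raw per-input boxes, but the reuse principle, containment of intermediate convex sets, is the same.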

Related research:

- Towards Verifying Robustness of Neural Networks Against Semantic Perturbations (12/19/2019): Verifying robustness of neural networks given a specified threat model i...
- Verification of Non-Linear Specifications for Neural Networks (02/25/2019): Prior work on neural network verification has focused on specifications ...
- Provably Bounding Neural Network Preimages (02/02/2023): Most work on the formal verification of neural networks has focused on b...
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks (10/18/2019): The problem of attribution is concerned with identifying the parts of an...
- Verifying Global Neural Network Specifications using Hyperproperties (06/21/2023): Current approaches to neural network verification focus on specification...
- Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation (03/02/2021): This paper adds to the fundamental body of work on benchmarking the robu...
- Reconstructing Network Inputs with Additive Perturbation Signatures (04/11/2019): In this work, we present preliminary results demonstrating the ability t...
