Shared Certificates for Neural Network Verification

09/01/2021
by   Christian Sprecher, et al.

Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a convex set of reachable values at each layer. This process is repeated independently for each input (e.g., image) and perturbation (e.g., rotation), leading to an expensive overall proof effort when handling an entire dataset. In this work we introduce a new method for reducing this verification cost, based on the key insight that convex sets obtained at intermediate layers can overlap across different inputs and perturbations. Leveraging this insight, we propose the general concept of shared certificates, enabling proof effort to be reused across multiple inputs and driving down overall verification costs. We validate our insight via an extensive experimental evaluation and demonstrate the effectiveness of shared certificates on a range of datasets and attack specifications, including geometric, patch, and ℓ_∞ input perturbations.
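The core idea can be sketched with the simplest convex relaxation, interval (box) propagation. The sketch below is illustrative, not the paper's implementation: `propagate_box`, `contained`, the toy network, and the widened "template" region are all hypothetical. It shows how an intermediate-layer box of one input can land inside a previously verified template box of another, letting the remaining layers be skipped for the second input.

```python
import numpy as np

def propagate_box(low, high, layers):
    """Soundly push an interval (box) through affine + ReLU layers,
    returning the box of reachable values after each layer."""
    boxes = []
    for W, b in layers:
        # Split W by sign so the affine bounds remain sound.
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        low, high = Wp @ low + Wn @ high + b, Wp @ high + Wn @ low + b
        # ReLU is monotone, so it can be applied to the bounds directly.
        low, high = np.maximum(low, 0), np.maximum(high, 0)
        boxes.append((low, high))
    return boxes

def contained(box, template):
    """Check whether a box lies inside a (previously verified) template."""
    (l, h), (tl, th) = box, template
    return bool(np.all(l >= tl) and np.all(h <= th))

# Toy two-layer ReLU network (hypothetical weights).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]

x, eps = np.array([0.5, -0.2, 0.1]), 0.05
boxes = propagate_box(x - eps, x + eps, layers)

# Hypothetical template: the layer-1 box of x, slightly widened, assumed
# to have already been verified from layer 1 to the output.
template = (boxes[0][0] - 0.05, boxes[0][1] + 0.05)

# A nearby input: if its layer-1 box lands inside the template, the
# certificate for the remaining layers is shared instead of recomputed.
x2 = x + 0.001
boxes2 = propagate_box(x2 - eps, x2 + eps, layers)
print(contained(boxes2[0], template))
```

A real verifier would use tighter relaxations (e.g., zonotopes or DeepPoly-style bounds), but the containment check that enables proof reuse has the same shape: subset inclusion of a new intermediate set in a stored, already-verified template.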


