Verification of Non-Linear Specifications for Neural Networks

02/25/2019
by   Chongli Qin, et al.

Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to certify richer properties of neural networks. To do this, we introduce the class of convex-relaxable specifications: nonlinear specifications that can be verified using a convex relaxation. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system; semantic consistency of a classifier's output labels under adversarial perturbations; and bounded error in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method can effectively verify these specifications. Moreover, the evaluation exposes failure modes in models that cannot be verified to satisfy them, emphasizing the importance of training models not just to fit the training data but also to be consistent with specifications.
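To make the idea of verifying a nonlinear specification concrete, here is a minimal illustrative sketch (not the paper's algorithm, which uses a tighter convex relaxation): interval bounds are propagated through a small ReLU network, and a nonlinear "energy" specification on the outputs, sum(y_i^2) <= c, is then bounded over the resulting output box. All names (`verify_energy_bound`, the toy network) are hypothetical.

```python
import numpy as np

def interval_affine(l, u, W, b):
    # Propagate the box [l, u] through the affine layer W x + b.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def interval_relu(l, u):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def verify_energy_bound(layers, x, eps, c):
    """Certify the nonlinear spec sum(y_i^2) <= c for every input in the
    L-infinity ball of radius eps around x, by relaxing the quadratic
    over the output box (a crude but sound convex relaxation)."""
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = interval_affine(l, u, W, b)
        if i < len(layers) - 1:  # ReLU on all hidden layers
            l, u = interval_relu(l, u)
    # Worst case of y_i^2 over [l_i, u_i] is max(l_i^2, u_i^2).
    worst_energy = float(np.sum(np.maximum(l**2, u**2)))
    return worst_energy <= c, worst_energy

# Toy two-layer network: identity hidden layer, then averaging output.
layers = [
    (np.eye(2), np.zeros(2)),
    (np.array([[0.5, 0.5]]), np.zeros(1)),
]
verified, bound = verify_energy_bound(layers, np.array([1.0, 1.0]), 0.1, c=2.0)
print(verified, bound)  # True: worst-case energy 1.21 <= 2.0
```

If the returned flag is False, the specification may still hold; the relaxation is only an over-approximation, which is exactly the trade-off the paper's convex-relaxable framework manages more carefully.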
