Safety Verification of Deep Neural Networks

10/21/2016
by Xiaowei Huang, et al.

Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.
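To make the SMT-based encoding concrete, below is a minimal sketch, not the authors' implementation: it only illustrates how the layer-by-layer constraints of a tiny feed-forward ReLU network and a small neighbourhood of an input can be handed to the Z3 solver from Python (z3-solver package). The 2-2-2 network, its weights, the input x0, the assumed class label, and the bound EPS are all illustrative assumptions; the paper's method additionally discretises the region so that the search over a family of manipulations is exhaustive.

```python
# Minimal sketch (assumed setup, not the paper's tool): encode a tiny ReLU
# network and a neighbourhood of an input as SMT constraints and ask Z3
# whether any point in the region changes the classification.
from z3 import Real, If, Solver, sat

# Hypothetical trained parameters of a 2-input, 2-hidden-unit, 2-class network.
W1 = [[1.0, -0.5], [0.3, 0.8]]   # hidden-layer weights
b1 = [0.1, -0.2]                 # hidden-layer biases
W2 = [[0.7, -1.0], [-0.4, 0.9]]  # output-layer weights
b2 = [0.0, 0.05]                 # output-layer biases

x0 = [0.6, 0.4]   # original input, assumed to be classified as class 0
EPS = 0.1         # radius of the region around x0 to be checked

s = Solver()

# Symbolic input variables constrained to the neighbourhood of x0.
x = [Real(f"x{i}") for i in range(2)]
for i in range(2):
    s.add(x[i] >= x0[i] - EPS, x[i] <= x0[i] + EPS)

def relu(e):
    # ReLU encoded as an if-then-else term.
    return If(e > 0, e, 0)

# Propagate the symbolic input through the network, layer by layer.
h = [relu(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
y = [sum(W2[k][j] * h[j] for j in range(2)) + b2[k] for k in range(2)]

# Adversarial condition: the other class scores at least as high as class 0,
# i.e. the classification is not invariant on this region.
s.add(y[1] >= y[0])

if s.check() == sat:
    m = s.model()
    print("adversarial example found:", [m.eval(xi) for xi in x])
else:
    print("classification is invariant over the encoded region")
```

In this toy setting, a satisfiable result yields a concrete perturbed input (an adversarial example that could be shown to human testers or used for fine-tuning), while an unsatisfiable result certifies invariance of the decision over the encoded region; scaling this idea to real networks is what the paper's layer-by-layer, discretised analysis addresses.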


Related research

02/01/2019 - Adversarial Example Generation
Deep Neural Networks have achieved remarkable success in computer vision...

10/21/2017 - Feature-Guided Black-Box Safety Testing of Deep Neural Networks
Despite the improved accuracy of deep neural networks, the discovery of ...

03/28/2017 - Adversarial Transformation Networks: Learning to Generate Adversarial Examples
Multiple different approaches of generating adversarial examples have be...

11/09/2017 - Crafting Adversarial Examples For Speech Paralinguistics Applications
Computational paralinguistic analysis is increasingly being used in a wi...

10/07/2020 - Global Optimization of Objective Functions Represented by ReLU Networks
Neural networks (NN) learn complex non-convex functions, making them des...

06/08/2020 - Global Robustness Verification Networks
The wide deployment of deep neural networks, though achieving great succ...

10/30/2022 - FI-ODE: Certified and Robust Forward Invariance in Neural ODEs
We study how to certifiably enforce forward invariance properties in neu...
