ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks

07/17/2019
by Xuankang Lin, et al.

Artificial Neural Networks (ANNs) have demonstrated remarkable utility in various challenging machine learning applications. While formally verified properties of their behaviors are highly desired, they have proven notoriously difficult to derive and enforce. Existing approaches typically formulate this problem as a post facto analysis process. In this paper, we present a novel learning framework that ensures such formal guarantees are enforced by construction. Our technique enables training provably correct networks with respect to a broad class of safety properties, a capability that goes well beyond existing approaches, without compromising much accuracy. Our key insight is that we can integrate an optimization-based abstraction refinement loop into the learning process and operate over dynamically constructed partitions of the input space that consider accuracy and safety objectives synergistically. The refinement procedure iteratively splits the input space from which training data is drawn, guided by the efficacy with which such partitions enable safety verification. We have implemented our approach in a tool (ART) and applied it to enforce general safety properties on the ACAS Xu unmanned aircraft collision avoidance dataset and the Collision Detection dataset. Importantly, we empirically demonstrate that realizing safety does not come at the price of much accuracy. Our results demonstrate that abstraction refinement provides a meaningful pathway for building neural networks that are both accurate and correct.
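To make the abstraction refinement loop concrete, below is a minimal sketch (not the authors' implementation) of the idea in PyTorch: interval bounds are propagated through the network over a partition of the input space, the over-approximated violation of a safety property is added to the training loss, and any input box the abstraction cannot yet verify is bisected along its widest dimension. The network architecture, the safety property (output 0 must stay below SAFE_MAX on every box), and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

SAFE_MAX = 1.0  # hypothetical safety threshold on output 0

net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

def interval_forward(model, lo, hi):
    """Propagate an input box [lo, hi] through the network, returning
    sound output bounds (standard interval bound propagation)."""
    for layer in model:
        if isinstance(layer, nn.Linear):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = layer(mid)                       # W * mid + b
            rad = rad @ layer.weight.abs().t()     # |W| * rad
            lo, hi = mid - rad, mid + rad
        else:  # ReLU is monotone, so it maps bounds to bounds
            lo, hi = layer(lo), layer(hi)
    return lo, hi

def safety_loss(model, boxes):
    """Penalize how far each box's over-approximated output violates
    the property; a value of zero means every box is verified safe."""
    total = 0.0
    for lo, hi in boxes:
        _, out_hi = interval_forward(model, lo, hi)
        total = total + torch.relu(out_hi[0] - SAFE_MAX)
    return total

@torch.no_grad()
def refine(model, boxes):
    """Bisect (along the widest dimension) any box the abstraction
    cannot yet verify, yielding a finer partition of the input space."""
    refined = []
    for lo, hi in boxes:
        _, out_hi = interval_forward(model, lo, hi)
        if out_hi[0] <= SAFE_MAX:
            refined.append((lo, hi))       # already verified: keep as-is
        else:
            d = torch.argmax(hi - lo)      # widest dimension
            mid = (lo[d] + hi[d]) / 2
            h1, l2 = hi.clone(), lo.clone()
            h1[d], l2[d] = mid, mid
            refined += [(lo, h1), (l2, hi)]
    return refined

# Training: alternate gradient steps on a joint accuracy + safety
# objective with refinement of the partition (x, y are dummy data).
x, y = torch.randn(256, 2), torch.randint(0, 2, (256,))
boxes = [(torch.full((2,), -1.0), torch.full((2,), 1.0))]  # initial box
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(x), y) + safety_loss(net, boxes)
    loss.backward()
    opt.step()
    boxes = refine(net, boxes)  # split only where verification fails
```

Note the design choice this sketch mirrors: refinement is driven by verification failures, so the partition stays coarse wherever safety is already provable and only grows where the abstraction is too imprecise; the paper's actual tool uses more sophisticated abstract domains and objectives than this interval-based toy.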


