Neural Network Repair with Reachability Analysis

08/09/2021
by Xiaodong Yang, et al.

Safety is a critical concern for the next generation of autonomous systems, which are likely to rely heavily on deep neural networks (DNNs) for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled systems under attacks, model uncertainties, and sensing errors is essential for safe autonomy. This research proposes a framework that repairs unsafe DNNs in safety-critical systems using reachability analysis. The repair process is inspired by adversarial training, which has proven highly effective at improving the safety and robustness of DNNs. Unlike traditional adversarial training, where adversarial examples are generated by random attacks and may not represent all unsafe behaviors, our repair process uses reachability analysis to compute the exact unsafe regions and to identify sufficiently representative examples, improving both the efficacy and the efficiency of the adversarial training. We evaluate our framework on two types of benchmarks for which no safe model is available as a reference. The first is a DNN controller for aircraft collision avoidance with access to its training data. The second is a rocket lander, where our framework integrates seamlessly with the well-known deep deterministic policy gradient (DDPG) reinforcement learning algorithm. The experimental results show that our framework successfully repairs all instances against multiple safety specifications with negligible performance degradation. In addition, to improve the computational and memory efficiency of the reachability analysis, we propose a depth-first search algorithm that combines an existing exact analysis method with an over-approximation approach based on a new set representation. Experimental results show that our method achieves a five-fold improvement in runtime and a two-fold improvement in memory usage compared to exact analysis alone.
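
To make the reachability-guided search concrete, the sketch below uses plain interval bound propagation as the over-approximate analysis inside a depth-first search over input boxes. This is a simplification: the paper's exact analysis method and its new set representation are not reproduced, and the function names (`interval_reach`, `unsafe_boxes`) and the safety property (every output bounded above by `out_max`) are illustrative assumptions.

```python
# A minimal, hypothetical sketch: interval bound propagation stands in for
# the paper's exact analysis and its new set representation; `out_max`
# encodes an assumed safety property "every output stays <= out_max".
import torch
import torch.nn as nn

def interval_reach(net: nn.Sequential, lo: torch.Tensor, hi: torch.Tensor):
    """Push the input box [lo, hi] through Linear/ReLU layers, returning a
    box that over-approximates the reachable outputs."""
    for layer in net:
        if isinstance(layer, nn.Linear):
            w_pos = layer.weight.clamp(min=0)   # split weights by sign so
            w_neg = layer.weight.clamp(max=0)   # the bounds stay sound
            lo, hi = (w_pos @ lo + w_neg @ hi + layer.bias,
                      w_pos @ hi + w_neg @ lo + layer.bias)
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

@torch.no_grad()
def unsafe_boxes(net, lo, hi, out_max, depth=10):
    """Depth-first search over input boxes: prune a box as soon as its
    over-approximate bound proves safety, otherwise bisect its widest
    dimension; boxes surviving at maximum depth are candidate unsafe regions."""
    _, out_hi = interval_reach(net, lo, hi)
    if bool((out_hi <= out_max).all()):
        return []                               # proven safe: prune branch
    if depth == 0:
        return [(lo, hi)]                       # refinement budget exhausted
    d = int(torch.argmax(hi - lo))              # bisect the widest dimension
    mid = (lo[d] + hi[d]) / 2
    hi_left, lo_right = hi.clone(), lo.clone()
    hi_left[d], lo_right[d] = mid, mid
    return (unsafe_boxes(net, lo, hi_left, out_max, depth - 1) +
            unsafe_boxes(net, lo_right, hi, out_max, depth - 1))
```

Pruning a branch as soon as the cheap over-approximate bound certifies the property, and refining only where it cannot, is the general mechanism by which such a combined scheme trades exactness for the runtime and memory savings the abstract reports.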
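
On top of such an analysis, the repair loop described in the abstract alternates verification with retraining. The sketch below, reusing `unsafe_boxes` from above, is a hypothetical minimal version: counterexamples are taken at the centers of the surviving boxes, relabeled by projecting the model's own outputs into the safe set, and mixed with the original training data to limit performance degradation. For the rocket lander, the paper instead couples this loop with DDPG; that integration is not shown here.

```python
# A hypothetical minimal repair loop: verify, collect representative
# counterexamples from the unsafe regions, relabel them into the safe set,
# and retrain together with the original data. The names and the relabeling
# rule are assumptions, not the paper's exact procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F

def repair(net, x_train, y_train, lo, hi, out_max,
           rounds=50, lr=1e-3, margin=0.1):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(rounds):
        boxes = unsafe_boxes(net, lo, hi, out_max)  # from the sketch above
        if not boxes:
            return True, net                        # specification verified
        # Representative counterexamples: one per remaining unsafe region.
        x_bad = torch.stack([(l + h) / 2 for l, h in boxes])
        # Relabel strictly inside the safe set so the over-approximate
        # analysis can eventually certify the repaired network as well.
        y_bad = net(x_bad).detach().clamp(max=out_max - margin)
        opt.zero_grad()
        loss = (F.mse_loss(net(x_train), y_train)   # keep original accuracy
                + F.mse_loss(net(x_bad), y_bad))    # remove unsafe behavior
        loss.backward()
        opt.step()
    return False, net                               # repair budget exhausted

# Toy usage: a 2-input controller whose output must never exceed 1.0
# anywhere on the domain [-1, 1]^2.
net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
x_train = torch.rand(256, 2) * 2 - 1
y_train = 0.5 * torch.sin(3 * x_train[:, :1])
verified, net = repair(net, x_train, y_train,
                       lo=torch.tensor([-1.0, -1.0]),
                       hi=torch.tensor([1.0, 1.0]),
                       out_max=1.0)
```

Mixing the original loss with the counterexample loss in a single objective is what keeps the repaired network close to its original behavior, matching the abstract's claim of negligible performance degradation.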


