Assumption Generation for the Verification of Learning-Enabled Autonomous Systems

05/27/2023
by Corina Pasareanu, et al.

Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size (they can have thousands or millions of parameters), lack of formal specifications (DNNs are typically learnt from labeled data, in the absence of any formal requirements), and sensitivity to small changes in the environment. We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties of such autonomous systems. Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties. The synthesized assumptions are the weakest in the sense that they characterize the output sequences of all the possible DNNs that, plugged into the autonomous system, guarantee the required safety properties. The assumptions can be leveraged as run-time monitors over a deployed DNN to guarantee the safety of the overall system; they can also be mined to extract local specifications for use during training and testing of DNNs. We illustrate our approach on a case study taken from the autonomous airplanes domain that uses a complex DNN for perception.
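The abstract leaves the concrete form of the synthesized assumptions open; in assume-guarantee reasoning they are commonly represented as finite automata over the interface alphabet of the unknown component. As a rough illustration of the run-time monitoring use mentioned above, the sketch below (all state names, outputs, and the toy property are hypothetical, not taken from the paper) replays a DNN's discretized output sequence on such an automaton and reports the first violation so the system can switch to a safe fallback.

```python
from dataclasses import dataclass, field


@dataclass
class AssumptionMonitor:
    """Run-time monitor that replays DNN outputs on an assumption automaton."""
    transitions: dict        # (state, dnn_output) -> next state
    initial_state: str
    error_states: set
    state: str = field(init=False)

    def __post_init__(self):
        self.state = self.initial_state

    def step(self, dnn_output: str) -> bool:
        """Advance on one DNN output; return False once the assumption is violated."""
        self.state = self.transitions.get((self.state, dnn_output), "error")
        return self.state not in self.error_states


# Toy assumption: once the perception DNN reports "intruder", it must not
# flip back to "clear" (states, alphabet, and property are illustrative only).
monitor = AssumptionMonitor(
    transitions={
        ("q0", "clear"): "q0",
        ("q0", "intruder"): "q1",
        ("q1", "intruder"): "q1",
        ("q1", "clear"): "error",
    },
    initial_state="q0",
    error_states={"error"},
)

for output in ["clear", "intruder", "intruder", "clear"]:
    if not monitor.step(output):
        print(f"Assumption violated on {output!r}: switch to a verified fallback controller")
        break
```

In an actual deployment the automaton would be the weakest assumption synthesized from the system model, and a violation would trigger a pre-verified fallback such as a conservative controller.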

Related research

Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study (02/06/2023)
Compositional Verification for Autonomous Systems with Deep Learning Components (10/18/2018)
Finding Input Characterizations for Output Properties in ReLU Neural Networks (03/09/2020)
Strategy to Increase the Safety of a DNN-based Perception for HAD Systems (02/20/2020)
Using Quantifier Elimination to Enhance the Safety Assurance of Deep Neural Networks (09/18/2019)
Continuous Safety Verification of Neural Networks (10/12/2020)
Run-Time Monitoring of Machine Learning for Robotic Perception: A Survey of Emerging Trends (01/05/2021)
