CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators

11/25/2019
by Julien Girard-Satabin, et al.

The topic of provable deep neural network robustness has attracted considerable interest in recent years. Most research has focused on adversarial robustness, which studies the robustness of perceptive models in the neighbourhood of particular samples, although other works have proved global properties of smaller neural networks. Formally verifying perception, however, remains largely uncharted, notably because of the lack of relevant properties to verify: the distribution of possible inputs cannot be formally specified. We propose to take advantage of the simulators that are increasingly used in industry, either to train machine learning models or to check them with statistical tests. Our formulation allows us to formally express and verify safety properties on perception units, covering all cases the simulator could ever generate, unlike statistical tests, which cover only previously seen examples. Along with this theoretical formulation, we provide a tool that translates deep learning models into standard logical formulae. As a proof of concept, we train a toy example mimicking the perception unit of an autonomous car, and we formally verify that it will never fail to capture the relevant information in the provided inputs.
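To give a flavour of what "covering all cases" means here, the sketch below proves an output bound of a tiny ReLU network for *every* input in a box, using interval bound propagation. This is a minimal illustration of verifying a property over a whole input set rather than testing sampled points; it is not the paper's tool, and the network weights, layer sizes, and property are invented for the example.

```python
# Interval bound propagation (IBP): a sound (if coarse) way to bound a
# ReLU network's outputs over an entire box of inputs, so a property
# proved on the bounds holds for ALL inputs in the box -- not just
# sampled ones. Toy stand-in for formal verification of a perception net.

def linear_bounds(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through y = W x + b.

    For each output, the lower bound pairs positive weights with input
    lower bounds and negative weights with input upper bounds (and
    vice versa for the upper bound).
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Tiny 2-2-1 network with made-up weights (illustrative only).
W1, b1 = [[1.0, -0.5], [0.5, 1.0]], [0.1, 0.0]
W2, b2 = [[1.0, 1.0]], [0.2]

# Input set: the whole box [-1, 1]^2, e.g. a simulator's parameter range.
lo, hi = [-1.0, -1.0], [1.0, 1.0]
lo, hi = linear_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = linear_bounds(lo, hi, W2, b2)

# Property: the output is strictly positive for every input in the box.
# A positive certified lower bound proves it (no counterexample search).
verified = lo[0] > 0.0
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}], property verified: {verified}")
```

Because the bounds are sound over-approximations, `verified == True` is a genuine proof for the whole box, while `False` is inconclusive (the bound may simply be too loose). SMT-based encodings, as used by tools that translate networks into logical formulae, trade this coarseness for exactness at higher cost.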

