Towards Probability-based Safety Verification of Systems with Components from Machine Learning

03/02/2020
by Hermann Kaindl, et al.

Machine learning (ML) has recently created many new success stories. Hence, there is a strong motivation to use ML technology in software-intensive systems, including safety-critical systems. This raises the issue of safety verification of ML-based systems, which is currently thought to be infeasible or, at least, very hard. We think that it requires taking into account specific properties of ML technology such as: (i) Most ML approaches are inductive, which is both their power and their source of failure. (ii) Neural networks (NN) resulting from deep learning are, at the current state of the art, not transparent. Consequently, there will always be errors remaining and, at least for deep NNs (DNNs), verification of their internal structure is extremely hard. However, traditional safety engineering also cannot provide full guarantees that no harm will ever occur. That is why probabilities are used, e.g., for specifying a Risk or a Tolerable Hazard Rate (THR). Recent theoretical work has extended the scope of formal verification to probabilistic model-checking, but this requires behavioral models. Hence, we propose verification based on probabilities of errors, both estimated by controlled experiments and output by the inductively learned classifier itself. Generalization error bounds may propagate to the probabilities of a hazard, which must not exceed a THR. As a result, the quantitatively determined bound on the probability of a classification error of an ML component in a safety-critical system contributes in a well-defined way to the latter's overall safety verification.
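The idea of propagating an experimentally estimated error bound to a hazard probability can be illustrated with a minimal sketch. This is not the paper's actual method: it assumes a Hoeffding-style confidence bound on the classification error observed in a controlled test, and a hypothetical conditional probability `p_hazard_given_error` linking a misclassification to a hazard; both names and the multiplicative propagation are illustrative assumptions.

```python
import math

def error_upper_bound(errors, n, delta=1e-3):
    """Upper confidence bound on the true classification error probability.

    Uses Hoeffding's inequality: with probability at least 1 - delta,
    the true error is at most the observed rate plus sqrt(ln(1/delta) / (2n)).
    `errors` is the number of misclassifications observed on `n` i.i.d. test samples.
    """
    p_hat = errors / n
    return p_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def within_thr(p_error_bound, p_hazard_given_error, thr):
    """Check whether the propagated hazard probability stays below the THR.

    Illustrative assumption: the hazard probability is bounded by the
    classification-error bound times the probability that an error
    actually leads to the hazard.
    """
    return p_error_bound * p_hazard_given_error <= thr

# Example: 3 errors on 10,000 test samples, 99.9% confidence.
bound = error_upper_bound(3, 10000)
ok = within_thr(bound, p_hazard_given_error=0.01, thr=1e-3)
```

Note how the bound is dominated by the confidence term when the observed error rate is low: tightening it requires more test samples, which reflects the paper's point that controlled experiments must be sized against the THR to be met.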


Related research:

12/13/2018 — Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry
Deep Neural Networks (DNN) will emerge as a cornerstone in automotive so...

01/18/2018 — Toward Scalable Verification for Safety-Critical Deep Networks
The increasing use of deep neural networks for safety-critical applicati...

12/05/2017 — Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems
Due to the increasing usage of machine learning (ML) techniques in secur...

08/17/2022 — Safety Assessment for Autonomous Systems' Perception Capabilities
Autonomous Systems (AS) are increasingly proposed, or used, in Safety Cr...

06/18/2023 — GPU-Accelerated Verification of Machine Learning Models for Power Systems
Computational tools for rigorously verifying the performance of large-sc...

11/03/2020 — Ensuring Dataset Quality for Machine Learning Certification
In this paper, we address the problem of dataset quality in the context ...

03/07/2020 — A Safety Framework for Critical Systems Utilising Deep Neural Networks
Increasingly sophisticated mathematical modelling processes from Machine...
