Strategy to Increase the Safety of a DNN-based Perception for HAD Systems

02/20/2020
by Timo Sämann et al.

Safety is one of the most important development goals for highly automated driving (HAD) systems. This applies in particular to the perception function, which is driven by Deep Neural Networks (DNNs). For DNNs, large parts of the traditional safety processes and requirements are not fully applicable or not sufficient. The aim of this paper is to present a framework for describing and mitigating DNN insufficiencies and for deriving relevant safety mechanisms to increase the safety of DNNs. To assess the effectiveness of these safety mechanisms, we present a categorization scheme for evaluation metrics.
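One commonly discussed class of runtime safety mechanisms for DNN-based perception is a plausibility monitor that flags low-confidence predictions for downstream handling. The paper itself does not prescribe a specific mechanism; the sketch below is a hypothetical, minimal example of such a monitor based on the top softmax probability, with the function name and threshold chosen for illustration only.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of class logits.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def confidence_monitor(logits, threshold=0.8):
    """Hypothetical safety mechanism: flag a DNN prediction as
    unreliable when the top softmax probability is below `threshold`,
    so a downstream component can trigger a fallback behavior."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = float(probs.max())
    return {"confident": top >= threshold, "top_prob": top}

# A clearly dominant logit passes the check; a near-tie does not.
print(confidence_monitor([8.0, 0.5, 0.1]))   # confident: True
print(confidence_monitor([1.0, 0.9, 0.8]))   # confident: False
```

Such a monitor is only one building block: its usefulness must itself be assessed with suitable evaluation metrics, which is exactly the kind of assessment the categorization scheme in the paper is intended to support.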


Related research

09/07/2023 · Deep Learning Safety Concerns in Automated Driving Perception
Recent advances in the field of deep learning and impressive performance...

02/06/2023 · Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study
Deep neural networks (DNNs) are increasingly used in safety-critical aut...

10/12/2020 · Continuous Safety Verification of Neural Networks
Deploying deep neural networks (DNNs) as core functions in autonomous dr...

12/14/2022 · Backdoor Mitigation in Deep Neural Networks via Strategic Retraining
Deep Neural Networks (DNN) are becoming increasingly more important in a...

09/19/2019 · The Colliding Reciprocal Dance Problem: A Mitigation Strategy with Application to Automotive Active Safety Systems
A reciprocal dance occurs when two mobile agents attempt to pass each ot...

06/06/2020 · Guarded Deep Learning using Scenario-Based Modeling
Deep neural networks (DNNs) are becoming prevalent, often outperforming ...

05/27/2023 · Assumption Generation for the Verification of Learning-Enabled Autonomous Systems
Providing safety guarantees for autonomous systems is difficult as these...
