Learning to falsify automated driving vehicles with prior knowledge

01/25/2021
by Andrea Favrin, et al.

While automated driving technology has made tremendous progress, the scalable and rigorous testing and verification of safe automated and autonomous driving vehicles remain challenging. This paper proposes a learning-based falsification framework for testing the implementation of an automated or self-driving function in simulation. We assume that the function specification is associated with a violation metric on possible scenarios. Prior knowledge is incorporated both to limit the variance of the scenario parameters and, via a model-based falsifier, to guide and improve the learning process. For an exemplary adaptive cruise controller, the presented framework yields non-trivial falsifying scenarios with higher reward than scenarios obtained by purely learning-based or purely model-based falsification approaches.
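The core loop the abstract describes — searching a prior-limited scenario parameter space for violations of a specification metric — can be illustrated with a toy adaptive-cruise-control example. Everything below is a hypothetical sketch, not the authors' implementation: the point-mass ACC model, controller gains, parameter ranges, and the prior-bounded random search (standing in for the paper's learning-based falsifier guided by a model-based one) are all illustrative assumptions.

```python
import random

def simulate_acc(init_gap, lead_decel, dt=0.1, steps=300):
    """Toy point-mass ACC: the ego vehicle follows a braking lead vehicle
    using a proportional controller on the gap error. Returns the minimum
    gap observed over the run; a value <= 0 counts as a collision."""
    v_ego = v_lead = 25.0        # m/s, both vehicles start at the same speed
    gap = min_gap = init_gap     # m
    time_gap, kp = 1.5, 0.5      # desired time gap (s) and controller gain
    for _ in range(steps):
        v_lead = max(0.0, v_lead - lead_decel * dt)
        desired_gap = 2.0 + time_gap * v_ego
        accel = max(-5.0, min(2.0, kp * (gap - desired_gap)))  # actuator limits
        v_ego = max(0.0, v_ego + accel * dt)
        gap += (v_lead - v_ego) * dt
        min_gap = min(min_gap, gap)
    return min_gap

def falsify(n_trials=500, seed=0):
    """Prior-bounded random search: sample scenario parameters only within
    plausible ranges (the 'prior knowledge') and keep the scenario with the
    worst violation metric, here the smallest minimum gap."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        init_gap = rng.uniform(10.0, 60.0)   # m, prior-limited range
        lead_decel = rng.uniform(1.0, 8.0)   # m/s^2, prior-limited range
        score = simulate_acc(init_gap, lead_decel)
        if best is None or score < best[0]:
            best = (score, init_gap, lead_decel)
    return best
```

In the paper's framework, the random sampler above would be replaced by a learner guided by a model-based falsifier; what carries over is the shared structure of a simulate-score-update loop over a scenario parameter space whose bounds encode prior knowledge.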


