The Reasonable Crowd: Towards evidence-based and interpretable models of driving behavior

07/28/2021
by Bassam Helou, et al.
Autonomous vehicles must balance a complex set of objectives. There is no consensus on how they should do so, nor on a model for specifying desired driving behavior. We created a dataset to help address some of these questions in a limited operating domain. The data consists of 92 traffic scenarios, with multiple ways of traversing each scenario. Multiple annotators expressed their preference between pairs of scenario traversals. We used the data to compare an instance of a rulebook, carefully hand-crafted independently of the dataset, with several interpretable machine learning models trained on the dataset, such as Bayesian networks, decision trees, and logistic regression. To compare driving behavior, these models use scores indicating how much each scenario traversal violates each of 14 driving rules. The rules are interpretable and designed by subject-matter experts. First, we found that these rules were sufficient for the models to achieve high classification accuracy on the dataset. Second, we found that the rulebook provides high interpretability without excessively sacrificing performance. Third, the data pointed to possible improvements in the rulebook and the rules, and to potential new rules. Fourth, we explored the interpretability vs. performance trade-off by also training non-interpretable models such as a random forest. Finally, we make the dataset publicly available to encourage discussion from the wider community on behavior specification for AVs. It can be found at github.com/bassam-motional/Reasonable-Crowd.
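To make the setup concrete, here is a minimal sketch of how pairwise preferences over rule-violation scores can train a logistic-regression comparator. All data here is synthetic and the Bradley-Terry-style difference encoding is an assumption for illustration, not the paper's exact pipeline:

```python
import numpy as np

# Hypothetical illustration: each traversal is scored by how much it
# violates each of 14 driving rules; an annotator judgment "A preferred
# over B" becomes one training example whose features are the
# violation-score differences between the two traversals.
rng = np.random.default_rng(0)
N_RULES = 14
N_PAIRS = 500

# Assumed hidden rule weights (larger violation -> less preferred).
true_w = rng.exponential(1.0, N_RULES)

# Synthetic violation scores for the two traversals of each pair.
viol_a = rng.random((N_PAIRS, N_RULES))
viol_b = rng.random((N_PAIRS, N_RULES))

# Feature = score difference; label = 1 when A is preferred, sampled
# from a logistic model of the weighted violation difference.
x = viol_b - viol_a
logits = x @ true_w
y = (rng.random(N_PAIRS) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Plain gradient-descent logistic regression (no external ML library),
# recovering interpretable per-rule weights from the preferences.
w = np.zeros(N_RULES)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    w -= 0.1 * (x.T @ (p - y)) / N_PAIRS

acc = float(np.mean((x @ w > 0) == (y == 1)))
print(f"pairwise classification accuracy: {acc:.2f}")
```

The learned weight vector `w` is directly interpretable: each component estimates how strongly violating one rule lowers a traversal's preference, which is the property that lets such models be compared against a hand-crafted rulebook.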


