Explaining a Deep Reinforcement Learning Docking Agent Using Linear Model Trees with User Adapted Visualization

03/01/2022
by Vilde B. Gjærum, et al.

Deep neural networks (DNNs) can be useful in marine robotics, but their usefulness is limited by their black-box nature. Explainable artificial intelligence (XAI) methods attempt to understand how such black boxes make their decisions. In this work, linear model trees (LMTs) are used to approximate the DNN controlling an autonomous surface vessel (ASV) in a simulated environment, and are then run in parallel with the DNN to provide explanations in the form of feature attributions in real time. How well a model can be understood depends not only on the explanation itself but also on how well it is presented and adapted to its receiver: different end-users may need different types of explanations as well as different representations of them. The main contributions of this work are (1) significantly improving both the accuracy and the build time of a greedy approach for building LMTs by introducing an ordering of features in the splitting of the tree, (2) characterizing the seafarer/operator and the developer as two distinct end-users of the agent and receivers of its explanations, and (3) proposing one visualization of the docking agent, the environment, and the LMT's feature attributions for when the developer is the end-user, and another for when the seafarer or operator is the end-user, based on their different characteristics.
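The core idea described above can be sketched in a few lines: sample states, query the black-box policy, fit a tree whose leaves hold linear models, and read per-feature attributions off the leaf coefficients. This is a minimal illustration, not the paper's implementation: `dnn_policy` is a hypothetical stand-in for the trained docking agent, and the surrogate is built from a plain `DecisionTreeRegressor` partition with per-leaf `LinearRegression` fits rather than the paper's greedy LMT-building algorithm with feature ordering.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in for the DNN docking policy: any callable
# mapping state features to a control output would do.
def dnn_policy(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 3))  # sampled states
y = dnn_policy(X)                           # black-box outputs to imitate

# Partition the state space with a shallow tree, then fit a linear
# model inside each leaf -- a simple linear-model-tree surrogate.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)
leaf_models = {
    leaf: LinearRegression().fit(X[leaf_ids == leaf], y[leaf_ids == leaf])
    for leaf in np.unique(leaf_ids)
}

def explain(x):
    """Feature attributions for one state: the contribution of each
    feature under the active leaf's linear model (coefficient * value)."""
    leaf = tree.apply(x.reshape(1, -1))[0]
    model = leaf_models[leaf]
    return model.coef_ * x, model.intercept_

attr, bias = explain(X[0])
print(attr)  # one attribution per state feature
```

Because the leaf models are linear, the surrogate's prediction decomposes exactly into the attributions plus the leaf intercept, which is what makes such a surrogate cheap enough to run alongside the DNN in real time.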


