
Real-Time Counterfactual Explanations For Robotic Systems With Multiple Continuous Outputs

by   Vilde B. Gjærum, et al.

Although many machine learning methods, especially from the field of deep learning, have been instrumental in addressing challenges within robotic applications, we cannot take full advantage of such methods until they can provide performance and safety guarantees. The lack of trust that impedes the use of these methods stems mainly from a lack of human understanding of what exactly machine learning models have learned, and of how robust their behaviour is. This is the problem the field of explainable artificial intelligence aims to solve. Based on insights from the social sciences, we know that humans prefer contrastive explanations, i.e., explanations answering the hypothetical question "what if?". In this paper, we show that linear model trees are capable of producing answers to such questions, so-called counterfactual explanations, for robotic systems, including in the case of multiple continuous inputs and outputs. We demonstrate the use of this method to produce counterfactual explanations for two robotic applications. Additionally, we explore the issue of infeasibility, which is of particular interest in systems governed by the laws of physics.
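To make the idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of how a linear model tree can yield counterfactuals for a system with multiple continuous inputs and outputs. A shallow scikit-learn decision tree partitions the input space and a separate linear regression is fitted per leaf, standing in for a linear model tree; within a leaf, the local model is affine, so the smallest input change reaching a target output can be found in closed form via the pseudoinverse. The synthetic data, tree depth, and target values are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy robotic-style data: 3 continuous inputs -> 2 continuous outputs,
# with a regime change at x0 = 0 that the tree can pick up.
X = rng.uniform(-1, 1, size=(500, 3))
Y = np.column_stack([
    np.where(X[:, 0] > 0, 2.0, -1.0) * X[:, 1] + 0.5 * X[:, 2],
    X[:, 0] + np.where(X[:, 0] > 0, 0.3, -0.3) * X[:, 2],
])

# Shallow tree partitions the input space; one linear model per leaf
# approximates the system locally (the linear-model-tree structure).
tree = DecisionTreeRegressor(max_depth=2).fit(X, Y)
leaf_ids = tree.apply(X)
leaf_models = {
    leaf: LinearRegression().fit(X[leaf_ids == leaf], Y[leaf_ids == leaf])
    for leaf in np.unique(leaf_ids)
}

def counterfactual(x, y_target):
    """Smallest (L2-norm) input change reaching y_target under the
    leaf's local affine model: solve A @ (x + d) + b = y_target for
    the minimum-norm d via the pseudoinverse of A."""
    leaf = tree.apply(x.reshape(1, -1))[0]
    m = leaf_models[leaf]
    A = m.coef_                       # shape: (2 outputs, 3 inputs)
    residual = y_target - m.predict(x.reshape(1, -1))[0]
    d = np.linalg.pinv(A) @ residual  # minimum-norm solution of A d = residual
    return x + d

x = np.array([0.5, 0.2, -0.1])
y_goal = np.array([1.0, 0.4])
x_cf = counterfactual(x, y_goal)
```

Note that `x_cf` may land outside the leaf's region or outside the physically reachable input set; checking for such cases corresponds to the infeasibility issue the paper raises for systems governed by the laws of physics.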
