HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning

06/02/2022
by   Michael T. Lash, et al.

The use of machine learning (ML) models in decision-making contexts, particularly high-stakes ones, is fraught with issues and peril, since a person - not a machine - must ultimately be held accountable for the consequences of decisions made using such systems. Machine learning explainability (MLX) promises to provide decision-makers with prediction-specific rationale, assuring them that model-elicited predictions are made for the right reasons and are thus reliable. Few works explicitly consider this key human-in-the-loop (HITL) component, however. In this work we propose HEX, a human-in-the-loop deep reinforcement learning approach to MLX. HEX incorporates 0-distrust projection to synthesize decider-specific explanation-providing policies from any arbitrary classification model. HEX is also constructed to operate in limited or reduced training data scenarios, such as those employing federated learning. Our formulation explicitly considers the decision boundary of the ML model in question rather than the underlying training data, a shortcoming of many model-agnostic MLX methods. Our proposed methods thus synthesize HITL MLX policies that explicitly capture the decision boundary of the model in question for use in limited data scenarios.


Related research

research · 07/10/2020
Impact of Legal Requirements on Explainability in Machine Learning
The requirements on explainability imposed by European laws and their im...

research · 09/03/2020
Explainable Empirical Risk Minimization
The widespread use of modern machine learning methods in decision making...

research · 11/27/2020
Teaching the Machine to Explain Itself using Domain Knowledge
Machine Learning (ML) has been increasingly used to aid humans to make b...

research · 07/19/2022
Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins
Digital Twins (DT) are essentially Dynamic Data-driven models that serve...

research · 04/12/2021
Understanding Prediction Discrepancies in Machine Learning Classifiers
A multitude of classifiers can be trained on the same data to achieve si...

research · 01/24/2023
Explainable Deep Reinforcement Learning: State of the Art and Challenges
Interpretability, explainability and transparency are key issues to intr...

research · 07/01/2020
Personalization of Hearing Aid Compression by Human-In-Loop Deep Reinforcement Learning
Existing prescriptive compression strategies used in hearing aid fitting...
