One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

01/27/2020
by Kacper Sokol, et al.

The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promise of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and how to extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats...
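The core idea of personalising a counterfactual explanation, letting the explainee decide which feature conditions may change, can be sketched with a toy brute-force search. The snippet below is purely illustrative: the counterfactual function, its parameters and the scikit-learn decision tree are assumptions made for demonstration, not the system described in the paper.

```python
# Minimal sketch of a user-constrained counterfactual search, assuming a
# scikit-learn classifier and an illustrative brute-force strategy.
from itertools import product

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
X, y, feature_names = data.data, data.target, data.feature_names

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)


def counterfactual(instance, mutable_features, step=0.25, max_steps=8):
    """Search for a prediction-flipping change restricted to user-chosen features.

    `mutable_features` encodes the explainee's interactive constraint: only these
    feature indices may appear in the counterfactual's conditional statement.
    """
    original_class = model.predict([instance])[0]
    offsets = np.arange(-max_steps, max_steps + 1) * step
    best = None
    for deltas in product(offsets, repeat=len(mutable_features)):
        candidate = instance.copy()
        for idx, delta in zip(mutable_features, deltas):
            candidate[idx] += delta
        if model.predict([candidate])[0] != original_class:
            cost = np.abs(np.asarray(deltas)).sum()  # prefer the smallest change
            if best is None or cost < best[0]:
                best = (cost, candidate)
    return None if best is None else best[1]


# The user asks for a counterfactual that only alters petal length and petal
# width (indices 2 and 3), leaving the sepal measurements fixed.
x = X[0].copy()
cf = counterfactual(x, mutable_features=[2, 3])
if cf is not None:
    changed = [f"{feature_names[i]}: {x[i]:.2f} -> {cf[i]:.2f}"
               for i in range(len(x)) if not np.isclose(x[i], cf[i])]
    print("Counterfactual found by changing:", "; ".join(changed))
else:
    print("No counterfactual found under the given constraints.")
```

A follow-up "What if?" question can then be modelled simply by re-invoking the search with a different set of mutable features or a different starting instance, which is the kind of interactive adjustment the abstract alludes to.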

Related research:

- LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees (05/04/2020)
- Efficient computation of contrastive explanations (10/06/2020)
- What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components (09/08/2022)
- Explainable Predictive Process Monitoring: A User Evaluation (02/15/2022)
- Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts (01/25/2022)
- On the overlooked issue of defining explanation objectives for local-surrogate explainers (06/10/2021)
- Influence-Driven Explanations for Bayesian Network Classifiers (12/10/2020)
