A Typology to Explore and Guide Explanatory Interactive Machine Learning

03/04/2022
by Felix Friedrich, et al.

More and more eXplanatory Interactive machine Learning (XIL) methods have recently been proposed, with the goal of extending a model's learning process by integrating human user supervision on the model's explanations. These methods were often developed independently, provide different motivations, and stem from different applications. Notably, there has so far been no comprehensive evaluation of these works. By identifying a common set of basic modules and discussing them in depth, our work unifies, for the first time, the various methods into a single typology. This typology can thus be used to categorize existing and future XIL methods based on the identified modules. Moreover, our work contributes by surveying six existing XIL methods. In addition to benchmarking these methods on their overall ability to revise a model, we perform further benchmarks regarding wrong-reason revision, interaction efficiency, robustness to feedback quality, and the ability to revise a strongly corrupted model. Apart from introducing these novel benchmarking tasks, we further introduce, for improved quantitative evaluation, a novel Wrong Reason (WR) metric that measures the average wrong-reason activation in a model's explanations, complementing qualitative inspection. In our evaluations, all methods prove able to revise a model successfully. However, we find significant differences between the methods on individual benchmark tasks, revealing valuable, application-relevant insights, not only for comparing current methods but also for motivating the inclusion of these benchmarks in the development of future XIL methods.
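The precise definition of the WR metric is given in the full paper; below is a minimal sketch of one plausible formulation, assuming explanations are attribution maps (e.g., saliency maps) and the wrong-reason (confounder) region is available as a binary mask. The function names (`wrong_reason_score`, `average_wr`) and the normalization choice are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def wrong_reason_score(explanation: np.ndarray, wr_mask: np.ndarray) -> float:
    """Fraction of the (normalized) explanation attribution that falls inside
    the known wrong-reason region: 0 = no wrong-reason activation,
    1 = all attribution lies on the confounder."""
    attr = np.abs(explanation)
    attr = attr / (attr.sum() + 1e-12)      # normalize attribution to sum to 1
    return float((attr * wr_mask).sum())    # attribution mass inside the mask

def average_wr(explanations, wr_masks) -> float:
    """Average wrong-reason activation over a set of samples."""
    return float(np.mean([wrong_reason_score(e, m)
                          for e, m in zip(explanations, wr_masks)]))
```

Under this reading, a successful XIL revision should drive the average WR score toward zero while predictive performance is maintained.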


