Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated

07/01/2021
by Felix Biessmann, et al.

The field of explainable artificial intelligence (XAI) has brought about an arsenal of methods to render Machine Learning (ML) predictions more interpretable. But how useful the explanations provided by transparent ML methods are for humans remains difficult to assess. Here we investigate the quality of interpretable computer vision algorithms using techniques from psychophysics. In crowdsourced annotation tasks, we study the impact of different interpretability approaches on annotation accuracy and task time. We compare these quality metrics with classical, automated XAI quality metrics computed without humans in the loop. Our results demonstrate that psychophysical experiments allow for robust quality assessment of transparency in machine learning. Interestingly, the quality metrics computed without humans in the loop neither provided a consistent ranking of interpretability methods nor were representative of how useful an explanation was for humans. These findings highlight the potential of methods from classical psychophysics for modern machine learning applications. We hope that our results provide convincing arguments for evaluating interpretability in its natural habitat, human-ML interaction, if the goal is to obtain an authentic assessment of interpretability.
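To make the central claim concrete: whether human-in-the-loop and automated quality metrics agree can be tested with a rank correlation such as Spearman's rho over the rankings they induce on a set of interpretability methods. The following minimal Python sketch illustrates the idea; the method names and scores are hypothetical placeholders, not results from the paper.

    # Sketch: test whether a human-in-the-loop quality metric and an
    # automated XAI quality metric rank interpretability methods the
    # same way. All method names and scores below are hypothetical.
    from scipy.stats import spearmanr

    methods = ["saliency", "lime", "grad_cam", "shap"]
    # Human-in-the-loop metric, e.g. mean annotation accuracy per method.
    human_accuracy = [0.71, 0.64, 0.78, 0.69]
    # Automated metric computed without humans, e.g. a faithfulness score.
    automated_score = [0.55, 0.62, 0.49, 0.58]

    for name, h, a in zip(methods, human_accuracy, automated_score):
        print(f"{name}: human={h:.2f}, automated={a:.2f}")

    # Spearman's rho compares the rankings induced by the two metrics;
    # a value near zero (or negative) means the rankings do not agree.
    rho, p_value = spearmanr(human_accuracy, automated_score)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")

A rho close to 1 would mean the automated metric orders methods the same way the human study does; values near zero or below, as the paper's title suggests, indicate no such agreement.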

Related research

11/24/2019
A psychophysics approach for quantitative comparison of interpretable computer vision models
The field of transparent Machine Learning (ML) has contributed many nove...

06/28/2021
Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning
Machine Learning (ML) and its applications have been transforming our li...

07/25/2019
HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop
While the role of humans is increasingly recognized in machine learning ...

11/20/2017
The Promise and Peril of Human Evaluation for Model Interpretability
Transparency, user trust, and human comprehension are popular ethical mo...

06/01/2023
SPINEX: Similarity-based Predictions and Explainable Neighbors Exploration for Regression and Classification Tasks in Machine Learning
The field of machine learning (ML) has witnessed significant advancement...

04/14/2022
Interpretability of Machine Learning Methods Applied to Neuroimaging
Deep learning methods have become very popular for the processing of nat...

01/27/2022
Using Shape Metrics to Describe 2D Data Points
Traditional machine learning (ML) algorithms, such as multiple regressio...
