A psychophysics approach for quantitative comparison of interpretable computer vision models

11/24/2019
by Felix Biessmann, et al.

The field of transparent Machine Learning (ML) has contributed many novel methods aiming at better interpretability for computer vision and ML models in general. But how useful the explanations provided by transparent ML methods are for humans remains difficult to assess. Most studies evaluate interpretability in qualitative comparisons, use experimental paradigms that do not allow for direct comparisons among methods, or report only offline experiments with no humans in the loop. Evaluations without humans in the loop have clear advantages, such as scalability, reproducibility, and less algorithmic bias than human-in-the-loop evaluations, but these metrics are of limited usefulness if we do not understand how they relate to metrics that take human cognition into account. Here we investigate the quality of interpretable computer vision algorithms using techniques from psychophysics. In crowdsourced annotation tasks we study the impact of different interpretability approaches on annotation accuracy and task time. To relate these findings to quality measures for interpretability that do not require humans in the loop, we compare quality metrics with and without humans in the loop. Our results demonstrate that psychophysical experiments allow for robust quality assessment of transparency in machine learning. Interestingly, the quality metrics computed without humans in the loop neither provided a consistent ranking of interpretability methods nor were representative of how useful an explanation was for humans. These findings highlight the potential of methods from classical psychophysics for modern machine learning applications. We hope that our results provide convincing arguments for evaluating interpretability in its natural habitat, human-ML interaction, if the goal is to obtain an authentic assessment of interpretability.
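One way to make the comparison described above concrete is to check whether an offline quality metric ranks interpretability methods the same way a human-in-the-loop measurement does. The sketch below is purely illustrative and is not the authors' analysis code; the method names and scores are hypothetical, and Spearman rank correlation is used here as one common measure of agreement between the two rankings.

```python
# Illustrative sketch only: hypothetical scores for four interpretability
# methods, NOT data from the paper. The question is whether an offline
# quality metric ranks methods the same way as crowdsourced human accuracy.
from scipy.stats import spearmanr

methods        = ["gradient saliency", "LIME", "Grad-CAM", "occlusion"]  # hypothetical
offline_metric = [0.62, 0.71, 0.58, 0.66]   # e.g. an automated explanation-quality score
human_accuracy = [0.81, 0.74, 0.86, 0.79]   # e.g. annotation accuracy with the explanation shown

for name, off, hum in zip(methods, offline_metric, human_accuracy):
    print(f"{name:>18}  offline={off:.2f}  human accuracy={hum:.2f}")

# Spearman rank correlation between the two rankings of methods.
rho, p_value = spearmanr(offline_metric, human_accuracy)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")

# A rho close to +1 would mean the offline metric preserves the human-based
# ranking; values near zero or negative would be consistent with the paper's
# finding that the two kinds of metrics need not agree.
```

With this kind of analysis, a low or negative correlation across methods would indicate that an offline metric cannot be used as a stand-in for how useful an explanation actually is to people.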

