Does the explanation satisfy your needs?: A unified view of properties of explanations

11/10/2022
by Zixi Chen, et al.

Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
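
The abstract's observation that the same term can denote different quantities is easy to see concretely. Below is a minimal Python sketch, entirely our own illustration rather than code or definitions from the paper, of two metrics that both circulate in the literature under the name "faithfulness": a correlation-based formulation and a deletion-based one. The function names, the baseline-replacement scheme, and the toy linear model are all assumptions made for this example.

```python
# Hypothetical illustration: two different quantities that papers have
# both called "faithfulness". Names and details are assumptions, not
# definitions taken from Chen et al.
import numpy as np

def faithfulness_correlation(model, x, attributions, baseline=0.0):
    """Correlation between each feature's attribution and the drop in
    model output when that single feature is replaced by a baseline."""
    f_x = model(x)
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline
        drops.append(f_x - model(x_pert))
    return float(np.corrcoef(attributions, drops)[0, 1])

def faithfulness_deletion(model, x, attributions, baseline=0.0):
    """Mean model output as features are deleted in order of decreasing
    attribution; a faster decay (lower mean) suggests the explanation
    ranked the truly influential features first."""
    order = np.argsort(attributions)[::-1]
    x_pert = x.copy()
    outputs = [model(x_pert)]
    for i in order:
        x_pert[i] = baseline
        outputs.append(model(x_pert))
    return float(np.mean(outputs))

# Toy usage: for a linear model, w * x is an exactly faithful attribution,
# so the correlation formulation returns 1.0 while the deletion
# formulation returns an uncalibrated scalar on a different scale.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
model = lambda x: float(w @ x)
x = rng.normal(size=5)
attributions = w * x

print(faithfulness_correlation(model, x, attributions))
print(faithfulness_deletion(model, x, attributions))
```

Even on this toy example the two formulations are not interchangeable: one is a bounded correlation, the other an unbounded curve summary, which is precisely the kind of trade-off between formulations the paper sets out to catalog.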

