Individual Explanations in Machine Learning Models: A Survey for Practitioners

04/09/2021
by Alfredo Carrillo, et al.

In recent years, the use of sophisticated statistical models that influence decisions in domains of high societal relevance has been on the rise. Although these models can often bring substantial improvements in the accuracy and efficiency of organizations, many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways. Hence, these models are often regarded as black boxes, in the sense that their internal mechanisms can be opaque to human audit. In real-world applications, particularly in domains where decisions can have a sensitive impact (e.g., criminal justice, credit scoring, insurance risk, health risks), model interpretability is desired. Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models. This survey reviews the most relevant and novel methods that form the state of the art for addressing the particular problem of explaining individual instances in machine learning. It seeks to provide a succinct review that can guide data science and machine learning practitioners in the search for methods appropriate to their problem domain.


Related research:

research · 04/09/2021 · Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation
Machine learning methods are being increasingly applied in sensitive soc...

research · 09/11/2020 · Accurate and Intuitive Contextual Explanations using Linear Model Trees
With the ever-increasing use of complex machine learning models in criti...

research · 10/14/2020 · Data science in economics: comprehensive review of advanced machine learning and deep learning methods
This paper provides a state-of-the-art investigation of advances in data...

research · 05/06/2019 · Interpretable Automated Machine Learning in Maana(TM) Knowledge Platform
Machine learning is becoming an essential part of developing solutions f...

research · 01/07/2020 · IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules
The wide adoption of machine learning in critical domains such as me...

research · 10/05/2018 · On the Art and Science of Machine Learning Explanations
This text discusses several explanatory methods that go beyond the error...

research · 07/12/2022 · Revealing Unfair Models by Mining Interpretable Evidence
The popularity of machine learning has increased the risk of unfair mode...
