Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation

04/09/2021
by   Alfredo Carrillo, et al.
Machine learning methods are increasingly applied in sensitive societal contexts, where decisions impact human lives. It has therefore become necessary to provide easily interpretable explanations of models' predictions. The academic literature has recently proposed a vast number of explanation methods. Unfortunately, to our knowledge, little has been documented about the challenges machine learning practitioners most often face when applying these methods in real-world scenarios. For example, a typical procedure such as feature engineering can render some of them no longer applicable. The present case study has two main objectives: first, to expose these challenges and how they affect the use of relevant and novel explanation methods; and second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain: poverty estimation and its use for prioritizing access to social policies.
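To illustrate the kind of challenge the abstract alludes to: when feature engineering expands a raw feature into several derived columns (e.g. one-hot encoding), per-column attributions no longer map one-to-one onto the original features. The sketch below, with purely illustrative feature names and attribution values not taken from the paper, shows one common mitigation: summing the attributions of engineered columns that derive from the same raw feature.

```python
import numpy as np

# Hypothetical per-column attributions (e.g. from a method such as SHAP)
# for a dataset where the raw feature "region" was one-hot encoded into
# three dummy columns during feature engineering.
encoded_columns = ["income", "region_a", "region_b", "region_c"]
attributions = np.array([0.40, 0.05, -0.12, 0.02])  # illustrative values

# Mitigation: aggregate attributions of engineered columns back to the
# raw feature they derive from, recovering a raw-feature explanation.
groups = {"income": ["income"],
          "region": ["region_a", "region_b", "region_c"]}
raw_attributions = {
    raw: round(float(sum(attributions[encoded_columns.index(c)]
                         for c in cols)), 2)
    for raw, cols in groups.items()
}
print(raw_attributions)
```

This additivity-based aggregation is only valid for explanation methods whose attributions sum meaningfully (such as Shapley-value-based ones); for other methods, a different remapping strategy is needed.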

