LEx: A Framework for Operationalising Layers of Machine Learning Explanations

04/15/2021
by Ronal Singh, et al.

Several social factors shape how people respond to the explanations used to justify AI decisions that affect them personally. In this position paper, we define a framework called the layers of explanation (LEx), a lens through which the appropriateness of different types of explanations can be assessed. The framework uses two notions, the sensitivity of features (their emotional responsiveness) and the level of stakes in a domain (the consequences of a decision), to determine whether a given type of explanation is appropriate in a given context. We demonstrate how to use the framework to assess the appropriateness of different types of explanations across several domains.
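The framework's two dimensions lend themselves to a simple decision rule: classify a context by feature sensitivity and domain stakes, then select candidate explanation types accordingly. The Python sketch below is illustrative only; the specific explanation types and the mapping are assumptions for demonstration, not the mapping defined in the paper.

```python
from enum import Enum

class Sensitivity(Enum):
    """Emotional responsiveness of the features used in the decision."""
    LOW = 1
    HIGH = 2

class Stakes(Enum):
    """Consequences of the decision in this domain."""
    LOW = 1
    HIGH = 2

def candidate_explanations(sensitivity: Sensitivity, stakes: Stakes) -> list[str]:
    # Hypothetical mapping from a (sensitivity, stakes) context to
    # explanation types judged appropriate for it. The paper itself
    # defines which layers of explanation suit which contexts.
    if sensitivity is Sensitivity.HIGH and stakes is Stakes.HIGH:
        # Sensitive features in a consequential domain: prefer explanations
        # that justify the outcome without surfacing raw sensitive attributes.
        return ["contrastive", "counterfactual"]
    if stakes is Stakes.HIGH:
        return ["feature-attribution", "counterfactual"]
    return ["feature-attribution", "example-based"]

# Example: a loan refusal that turns on emotionally sensitive features.
print(candidate_explanations(Sensitivity.HIGH, Stakes.HIGH))
```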


