"There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making

by Jakob Schoeffer et al.

Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was arrived at. In this work, we conduct a human subject study to assess people's perceptions of informational fairness (i.e., whether people think they are given adequate information on and explanation of the process and its outcomes) and trustworthiness of an underlying ADS when provided with varying types of information about the system. More specifically, we instantiate an ADS in the area of automated loan approval and generate different explanations that are commonly used in the literature. We randomize the amount of information that study participants get to see by providing certain groups of people with the same explanations as others plus additional explanations. From our quantitative analyses, we observe that different amounts of information as well as people's (self-assessed) AI literacy significantly influence the perceived informational fairness, which, in turn, positively relates to perceived trustworthiness of the ADS. A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations, among which are (i) consistency (both with people's expectations and across different explanations), (ii) disclosure of monotonic relationships between features and outcome, and (iii) actionability of recommendations.
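To make the setup concrete, the following is a minimal, hypothetical sketch (not the authors' actual system) of a loan-approval scorer together with two explanation types commonly generated in this literature: a local feature-attribution explanation and a counterfactual ("what would need to change") recommendation. All weights, feature names, and thresholds here are illustrative assumptions.

```python
import math

# Illustrative, hand-set weights; a real ADS would learn these from data.
WEIGHTS = {"income": 0.8, "debt": -1.2, "credit_history": 0.5}
BIAS = -0.3

def score(applicant):
    """Probability of approval under a logistic scoring model."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decide(applicant, threshold=0.5):
    return "approved" if score(applicant) >= threshold else "denied"

def feature_attribution(applicant):
    """Local explanation: each feature's signed contribution to the score.

    Monotone weights make the feature-outcome relationships explicit,
    one of the desiderata noted in the abstract.
    """
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def counterfactual(applicant, feature, threshold=0.5, step=0.05, max_steps=200):
    """Actionable explanation: smallest increase in `feature` that flips
    the decision to "approved", or None if no flip is found."""
    probe = dict(applicant)
    for _ in range(max_steps):
        if decide(probe, threshold) == "approved":
            return probe[feature] - applicant[feature]
        probe[feature] += step
    return None

applicant = {"income": 0.4, "debt": 0.9, "credit_history": 0.6}
print(decide(applicant))                 # → "denied" (z = -0.76)
print(feature_attribution(applicant))    # debt contributes negatively
print(counterfactual(applicant, "income"))
```

The study's treatment groups can be thought of as seeing different subsets of such outputs (decision only, plus attributions, plus counterfactuals), which is how the amount of information shown to participants was varied.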




Related papers:

- On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
- Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making
- A Study on Fairness and Trust Perceptions in Automated Decision Making
- LEx: A Framework for Operationalising Layers of Machine Learning Explanations
- Deceptive AI Systems That Give Explanations Are Just as Convincing as Honest AI Systems in Human-Machine Decision Making
- Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment
