A Study on Fairness and Trust Perceptions in Automated Decision Making

03/08/2021
by Jakob Schoeffer, et al.

Automated decision systems are increasingly used for consequential decision making, for a variety of reasons. These systems often rely on sophisticated yet opaque models, which allow little or no insight into how or why a given decision was reached. This is not only problematic from a legal perspective; non-transparent systems are also prone to yielding undesirable (e.g., unfair) outcomes, because their soundness is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different approaches to explaining such systems with respect to their effect on people's perceptions of the fairness and trustworthiness of the underlying mechanisms. A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended, and discussed thoroughly in the larger main study.


Related research:

04/29/2022
A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making
Automated decision systems (ADS) are increasingly used for consequential...

09/13/2021
Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making
Automated decision systems (ADS) have become ubiquitous in many high-sta...

05/10/2023
A Classification of Feedback Loops and Their Relation to Biases in Automated Decision-Making Systems
Prediction-based decision-making systems are becoming increasingly preva...

07/23/2022
Causal Fairness Analysis
Decision-making systems based on AI and machine learning have been used ...

04/16/2018
Decision Provenance: Capturing data flow for accountable systems
Demand is growing for more accountability in the technological systems t...
