Enhancing the Accuracy and Fairness of Human Decision Making

05/25/2018
by Isabel Valera, et al.

Societies often rely on human experts to take a wide variety of decisions affecting their members, from jail-or-release decisions taken by judges and stop-and-frisk decisions taken by police officers to accept-or-reject decisions taken by academics. In this context, each decision is taken by an expert who is typically chosen uniformly at random from a pool of experts. However, these decisions may be imperfect due to limited experience, implicit biases, or faulty probabilistic reasoning. Can we improve the accuracy and fairness of the overall decision making process by optimizing the assignment between experts and decisions? In this paper, we address the above problem from the perspective of sequential decision making and show that, for different fairness notions from the literature, it reduces to a sequence of (constrained) weighted bipartite matchings, which can be solved efficiently using algorithms with approximation guarantees. Moreover, these algorithms also benefit from posterior sampling to actively trade off exploitation (selecting expert assignments which lead to accurate and fair decisions) and exploration (selecting expert assignments to learn about the experts' preferences and biases). We demonstrate the effectiveness of our algorithms on both synthetic and real-world data and show that they can significantly improve both the accuracy and fairness of the decisions taken by pools of experts.
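To make the round-based assignment idea concrete, below is a minimal sketch (not the authors' implementation): each expert's unknown accuracy is modeled with a Beta posterior, posterior (Thompson-style) sampling drives exploration, and SciPy's linear_sum_assignment solves the maximum-weight bipartite matching between incoming cases and experts. The fairness constraints and approximation algorithms from the paper are omitted, and all quantities (number of experts, cases per round, simulated accuracies) are illustrative assumptions.

```python
# Minimal sketch: posterior sampling + weighted bipartite matching for
# assigning cases to experts. Fairness constraints are intentionally omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

n_experts, n_rounds, cases_per_round = 5, 200, 3

# Beta posterior over each expert's probability of deciding correctly.
alpha = np.ones(n_experts)
beta = np.ones(n_experts)

# Unknown true accuracies, used here only to simulate feedback.
true_acc = rng.uniform(0.5, 0.95, size=n_experts)

for t in range(n_rounds):
    # Exploration: draw one plausible accuracy per expert from its posterior.
    sampled_acc = rng.beta(alpha, beta)

    # Weight matrix: rows = cases arriving this round, columns = experts.
    # Here the weight is just the sampled accuracy; in general it could also
    # depend on case features and on fairness penalties.
    weights = np.tile(sampled_acc, (cases_per_round, 1))

    # Exploitation: maximum-weight bipartite matching (Hungarian algorithm).
    case_idx, expert_idx = linear_sum_assignment(weights, maximize=True)

    # Simulate the matched experts' decisions and update their posteriors.
    for j in expert_idx:
        correct = rng.random() < true_acc[j]
        alpha[j] += correct
        beta[j] += 1 - correct

print("posterior mean accuracy:", alpha / (alpha + beta))
print("true accuracy:          ", true_acc)
```

In this sketch the matching assigns each case to a distinct expert within a round; adding the paper's fairness notions would amount to constraining or penalizing the weight matrix before solving the matching.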



