FairLens: Auditing Black-box Clinical Decision Support Systems

11/08/2020
by   Cecilia Panigutti, et al.

The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. Detecting and mitigating biased models is a delicate task that should be tackled with care and with domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system. In this scenario, healthcare facility experts can run FairLens on their own historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to attributes such as age, ethnicity, gender, and insurance; it then assesses the model's performance on these subgroups, identifying those in need of expert evaluation. Finally, building on recent state-of-the-art XAI (eXplainable Artificial Intelligence) techniques, FairLens explains which elements of a patient's clinical history drive the model's error in the selected subgroup. FairLens thus allows experts to investigate whether to trust the model and to spotlight group-specific biases that might constitute potential fairness issues.
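The first two steps of the audit described above, stratifying patients by attribute and assessing per-subgroup performance, can be sketched as follows. This is a minimal illustration, not the FairLens implementation: the function `audit_subgroups`, the record fields, and the `predict` callable are hypothetical names chosen for the example.

```python
from collections import defaultdict

def audit_subgroups(records, predict, attribute, min_size=2):
    """Stratify patient records by an attribute (e.g. gender, age band)
    and report the black-box model's error rate on each subgroup."""
    # Step 1: stratify the historical data into subgroups.
    groups = defaultdict(list)
    for record in records:
        groups[record[attribute]].append(record)

    # Step 2: assess model performance on each sufficiently large subgroup.
    report = {}
    for value, members in groups.items():
        if len(members) < min_size:
            continue  # too few patients for a meaningful estimate
        errors = sum(predict(r["history"]) != r["outcome"] for r in members)
        report[value] = errors / len(members)
    return report

# Toy usage: a trivial "model" that predicts from the first history item.
records = [
    {"gender": "F", "history": [1], "outcome": 1},
    {"gender": "F", "history": [0], "outcome": 0},
    {"gender": "M", "history": [1], "outcome": 0},
    {"gender": "M", "history": [0], "outcome": 0},
]
report = audit_subgroups(records, lambda h: h[0], "gender")
# Subgroups whose error rate stands out would then be passed to an
# XAI step that explains which clinical-history elements drive the error.
```

Subgroups with a markedly higher error rate than the overall population are the ones the paper flags for expert evaluation.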


