Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness

06/01/2022
by Yuri Nakao, et al.

As Artificial Intelligence (AI) that aids or automates decision-making advances rapidly, its fairness becomes a particular concern. To create reliable, safe, and trustworthy systems through human-centered artificial intelligence (HCAI) design, recent efforts have produced user interfaces (UIs) that let AI experts investigate the fairness of AI models. In this work, we provide a design space exploration that supports not only data scientists but also domain experts in investigating AI fairness. Using loan applications as an example, we held a series of workshops with loan officers and data scientists to elicit their requirements. We instantiated these requirements into FairHIL, a UI to support human-in-the-loop fairness investigations, and describe how this UI could be generalized to other use cases. We evaluated FairHIL through a think-aloud user study. Our work contributes better designs for investigating an AI model's fairness, moving closer towards responsible AI.
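The abstract describes a UI rather than an algorithm, but the kind of group-level fairness check such a tool might surface over loan data can be sketched briefly. The snippet below is a minimal illustration, not FairHIL's implementation: the DataFrame, column names ("gender", "approved"), and the demographic_parity_difference helper are hypothetical, and demographic parity is only one of many metrics a human-in-the-loop review could inspect.

import pandas as pd

def demographic_parity_difference(df, group_col, prediction_col):
    # Gap between the highest and lowest positive-decision rates across groups.
    rates = df.groupby(group_col)[prediction_col].mean()
    return rates.max() - rates.min()

# Hypothetical loan-application decisions: 'gender' is the sensitive attribute,
# 'approved' is the model's binary decision (1 = approve).
applications = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [1,    0,   0,   1,   1,   0,   1,   1],
})

gap = demographic_parity_difference(applications, "gender", "approved")
print(f"Demographic parity difference: {gap:.2f}")  # 0.25 for this toy data

A domain expert reviewing such a gap would still need context (base rates, sample sizes, legitimate factors) before judging the model unfair, which is the kind of joint data-scientist/domain-expert investigation the paper's design space targets.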
