The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions

02/10/2021
by   Juliana Jansen Ferreira, et al.

The explanation dimension of Artificial Intelligence (AI)-based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. There is a great deal of research addressing the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need more awareness of how AI works and of its outcomes in order to build a relationship with the system. Decision-makers usually need to justify their decisions to others across different domains. If a decision is somehow based on or influenced by an AI system's outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI system in the loop, and explore one case from the literature on how XAI can affect people justifying their decisions, considering the importance of building the human-AI relationship in those scenarios.
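To make the idea of an outcome explanation concrete, here is a minimal sketch of a local, model-agnostic explanation that a decision-maker could cite when justifying an AI-influenced decision. Everything here is illustrative: the toy loan-approval model, the feature names, and the thresholds are assumptions, not part of the paper. The technique shown is a simple leave-one-feature-out perturbation: neutralize each input feature in turn and report the features whose removal flips the model's decision.

```python
def model(applicant):
    """Toy scoring model (hypothetical): approves when a weighted score
    of the applicant's features passes a fixed threshold."""
    score = 0.6 * applicant["income"] + 0.4 * applicant["credit_history"]
    return "approve" if score >= 0.5 else "deny"

def explain(applicant, baseline=0.0):
    """Leave-one-feature-out explanation: set each feature to a neutral
    baseline and collect the features whose removal changes the decision.
    Those are the features a decision-maker could point to as the
    reasons behind the outcome."""
    original = model(applicant)
    influential = []
    for feature in applicant:
        perturbed = dict(applicant, **{feature: baseline})
        if model(perturbed) != original:
            influential.append(feature)
    return original, influential

decision, reasons = explain({"income": 0.9, "credit_history": 0.2})
print(decision, reasons)  # the features listed are the ones that drove the decision
```

In this sketch, the explanation is not the model's internals but a statement about which inputs mattered for this particular outcome, which is the kind of justification-oriented explanation the paper argues decision-makers need.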

