The Impact of Explanations on Layperson Trust in Artificial Intelligence-Driven Symptom Checker Apps: Experimental Study

02/27/2022
by Claire Woodcock, et al.

To achieve the promoted benefits of an AI symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions and thereby encourage trust and adoption. However, the effectiveness of the explanation types used in AI-driven symptom checkers has not yet been studied. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople. This study ascertains whether the explanations provided by a symptom checker affect explanatory trust among laypeople (N=750) and whether this trust is influenced by their existing knowledge of the disease. The results suggest that system builders developing explanations for symptom-checking apps should consider the recipient's knowledge of a disease and tailor explanations to each user's specific needs. Effort should be placed on generating explanations that are personalized to each user of a symptom checker, fully discounting the diseases they may be aware of and closing their information gap.

Related research

10/15/2018: Towards Providing Explanations for AI Planner Decisions
In order to engender trust in AI, humans must understand what an AI syst...

04/26/2022: User Trust on an Explainable AI-based Medical Diagnosis Support System
Recent research has supported that system explainability improves user t...

10/07/2021: From the Head or the Heart? An Experimental Design on the Impact of Explanation on Cognitive and Affective Trust
Automated vehicles (AVs) are social robots that can potentially benefit ...

07/21/2023: Providing personalized Explanations: a Conversational Approach
The increasing applications of AI systems require personalized explanati...

02/20/2020: Do you comply with AI? – Personalized explanations of learning algorithms and their impact on employees' compliance behavior
Machine Learning algorithms are technological key enablers for artificia...

06/23/2014: A Unified Quantitative Model of Vision and Audition
We have put forwards a unified quantitative framework of vision and audi...

05/02/2022: TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security
Despite AI's significant growth, its "black box" nature creates challeng...
