Designing for Responsible Trust in AI Systems: A Communication Perspective

04/29/2022
by Q. Vera Liao, et al.

Current literature and public discourse on "trust in AI" are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust. Given that AI systems differ in their level of trustworthiness, two open questions come to the fore: how should AI trustworthiness be responsibly communicated to ensure appropriate and equitable trust judgments by different users, and how can we protect users from deceptive attempts to earn their trust? We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH, which describes how trustworthiness is communicated in AI systems through trustworthiness cues and how those cues are processed by people to make trust judgments. Besides AI-generated content, we highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users. By bringing to light the variety of users' cognitive processes to make trust judgments and their potential limitations, we urge technology creators to make conscious decisions in choosing reliable trustworthiness cues for target users and, as an industry, to regulate this space and prevent malicious use. Towards these goals, we define the concepts of warranted trustworthiness cues and expensive trustworthiness cues, and propose a checklist of requirements to help technology creators identify appropriate cues to use. We present a hypothetical use case to illustrate how practitioners can use MATCH to design AI systems responsibly, and discuss future directions for research and industry efforts aimed at promoting responsible trust in AI.


