A Conceptual Framework for Establishing Trust in Real World Intelligent Systems

04/12/2021
by Michael Guckert, et al.

Intelligent information systems that contain emergent elements often encounter trust problems because results are not sufficiently explained and the procedure itself cannot be fully retraced. This is caused by a control flow that depends either on stochastic elements or on the structure and relevance of the input data. Trust in such algorithms can be established by letting users interact with the system so that they can explore results and find patterns that can be compared with their expected solution. Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns and may increase the trust that a user has in the solution. If expectations are not met, close inspection can be used to decide whether a solution conforms to the expectations or whether it goes beyond them. By either accepting or rejecting a solution, the user's set of expectations evolves and a learning process for the user is established. In this paper we present a conceptual framework that reflects and supports this process. The framework is the result of an analysis of two exemplary case studies from two different disciplines with information systems that assist experts in their complex tasks.
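The accept-or-reject cycle sketched in the abstract can be pictured as a simple expectation-update loop. The Python sketch below is purely illustrative and not taken from the paper; all names (ExpectationSet, review_solution, the example patterns) are hypothetical assumptions used to make the loop concrete.

```python
# Illustrative sketch only: a toy model of the expectation-evolution loop
# described in the abstract. All names and behaviors here are hypothetical
# and not drawn from the paper itself.

from dataclasses import dataclass, field


@dataclass
class ExpectationSet:
    """Patterns the user currently expects to see in a solution."""
    patterns: set[str] = field(default_factory=set)

    def conforms(self, solution_patterns: set[str]) -> bool:
        # A solution conforms if it exhibits every expected pattern.
        return self.patterns <= solution_patterns

    def learn(self, solution_patterns: set[str]) -> None:
        # Accepting a solution that goes beyond expectations extends them.
        self.patterns |= solution_patterns


def review_solution(expectations: ExpectationSet,
                    solution_patterns: set[str],
                    accept_novel: bool) -> bool:
    """One interact-inspect-decide iteration: accept or reject a solution."""
    if expectations.conforms(solution_patterns):
        return True  # matches expectations, accepted directly
    if accept_novel:
        # Close inspection convinced the user: expectations evolve.
        expectations.learn(solution_patterns)
        return True
    return False  # rejected; expectations stay unchanged


if __name__ == "__main__":
    user = ExpectationSet({"seasonal trend"})
    accepted = review_solution(user, {"seasonal trend", "weekend dip"},
                               accept_novel=True)
    print(accepted, user.patterns)  # True {'seasonal trend', 'weekend dip'}
```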

