A probabilistic theory of trust concerning artificial intelligence: can intelligent robots trust humans?

07/26/2022
by Saleh Afroogh, et al.

In this paper, I argue for a probabilistic theory of trust and for the plausibility of “trustworthy AI” in which we genuinely trust (as opposed to merely rely on). I show that current trust theories cannot accommodate trust pertaining to AI, and I propose an alternative probabilistic theory that accounts for the four major types of AI-related trust: an AI agent’s trust in another AI agent, a human agent’s trust in an AI agent, an AI agent’s trust in a human agent, and an AI agent’s trust in an object (including mental and complex objects). I draw a broadly neglected distinction between transitive and intransitive senses of trust, each of which calls for a distinctive semantic theory. Based on this distinction, I classify the current theories into theories of trust and theories of trustworthiness, and I show that they fail to model some of the major types of AI-related trust. The proposed conditional probabilistic theories of trust and of trustworthiness, by contrast, are scalable, and they also accommodate the major types of non-AI trust, including interpersonal trust, reciprocal trust, one-sided trust, and trust in objects—e.g., thoughts, theories, data, algorithms, systems, and institutions.
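The paper itself does not publish a formalism, but its core idea—that transitive trust is a conditional probability exceeding some threshold—can be sketched in code. The following is a minimal illustrative model, not the author's own; the `Agent` class, the credence dictionary, and the 0.8 threshold are all assumptions introduced here. A truster (AI or human) trusts a trustee (agent or object) to perform a task when its subjective probability of successful performance is high enough, which uniformly covers all four AI-related trust types the abstract lists.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A truster: either an AI agent or a human agent (illustrative, not from the paper)."""
    name: str
    kind: str  # "ai" or "human"
    # credences[(trustee, task)] = subjective conditional probability
    # that the trustee performs the task as expected, given the evidence
    credences: dict = field(default_factory=dict)

    def update_credence(self, trustee: str, task: str, p: float) -> None:
        # Store the truster's current conditional probability estimate.
        self.credences[(trustee, task)] = p

    def trusts(self, trustee: str, task: str, threshold: float = 0.8) -> bool:
        # Transitive trust: A trusts B to do X iff A's credence
        # P(B does X | evidence) meets the threshold (assumed value).
        return self.credences.get((trustee, task), 0.0) >= threshold

# An intelligent robot as truster: trust in a human and trust in an object.
robot = Agent("R2", "ai")
robot.update_credence("operator", "give safe instructions", 0.9)
print(robot.trusts("operator", "give safe instructions"))   # AI trusts human
robot.update_credence("sensor_feed", "report accurately", 0.6)
print(robot.trusts("sensor_feed", "report accurately"))     # trust in an object fails
```

Because the trustee is just a label, the same `trusts` predicate handles AI–AI, human–AI, AI–human, and AI–object trust; only the threshold and the evidence feeding the credences differ. This is what makes the conditional-probabilistic account scalable in the sense the abstract claims.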

