A probabilistic theory of trust concerning artificial intelligence: can intelligent robots trust humans?

07/26/2022
by Saleh Afroogh, et al.

In this paper, I argue for a probabilistic theory of trust and for the plausibility of “trustworthy AI” in which we genuinely trust (as opposed to merely rely on). I show that current trust theories cannot accommodate trust pertaining to AI, and I propose an alternative probabilistic theory that accounts for the four major types of AI-related trust: an AI agent’s trust in another AI agent, a human agent’s trust in an AI agent, an AI agent’s trust in a human agent, and an AI agent’s trust in an object (including mental and complex objects). I draw a broadly neglected distinction between transitive and intransitive senses of trust, each of which calls for a distinctive semantic theory. Based on this distinction, I classify current accounts into theories of trust and theories of trustworthiness, and show that they fail to model some of the major types of AI-related trust. The proposed conditional probabilistic theory of trust and theory of trustworthiness, unlike current trust theories, are scalable, and they also accommodate the major types of non-AI trust, including interpersonal trust, reciprocal trust, one-sided trust, and trust in objects, such as thoughts, theories, data, algorithms, systems, and institutions.
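The abstract does not spell out the formalism behind the conditional probabilistic account. As a purely illustrative sketch, and not the paper’s actual definition, one way such a condition could be read is that an agent A trusts an agent B to perform an action φ when A’s estimate of the conditional probability P(B performs φ | available evidence) exceeds a context-dependent threshold. The small Python sketch below assumes exactly this reading; every name in it is hypothetical.

```python
# Illustrative sketch only: a conditional-probability trust condition,
# NOT the formalism defined in the paper. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class TrustJudgment:
    trustor: str          # e.g., an AI agent
    trustee: str          # e.g., a human agent, another AI agent, or an object
    action: str           # the act or performance being trusted for
    p_success: float      # estimated P(trustee performs action | available evidence)
    threshold: float      # context-dependent; higher stakes suggest a higher threshold

    def trusts(self) -> bool:
        """Trust holds when the conditional probability clears the threshold."""
        return self.p_success >= self.threshold


# Example: an AI agent assessing whether to trust a human operator's instruction.
judgment = TrustJudgment(
    trustor="AI agent",
    trustee="human operator",
    action="provide a correct override command",
    p_success=0.93,
    threshold=0.90,
)
print(judgment.trusts())  # True under these illustrative numbers
```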

