When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games

07/22/2020
by The Anh Han, et al.

The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user. Consequently, using such an agent exposes the user to the risk that the agent may act in ways opposed to the user's goals. It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. Here we formalise this idea using the methods of evolutionary game theory to study the viability of trust-based strategies in repeated games. These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating. Unlike classic reciprocal strategies, once mutual cooperation has been observed for a threshold number of rounds, they stop checking their co-player's behaviour every round and instead check only with some probability. By doing so, they reduce the opportunity cost of verifying whether their co-player's action was actually cooperative. We demonstrate that these trust-based strategies can outcompete strategies that are always conditional, such as Tit-for-Tat, when the opportunity cost is non-negligible. We argue that this cost is likely to be greater when the interaction is between people and intelligent agents, because of the reduced transparency of the agent. Consequently, we expect people to use trust-based strategies more frequently in interactions with intelligent agents. Our results provide new, important insights into the design of mechanisms for facilitating interactions between humans and intelligent agents, where trust is an essential factor.
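
As a minimal sketch of the mechanism the abstract describes, the Python simulation below pits a trust-based strategy against Tit-for-Tat in an iterated prisoner's dilemma with a per-round verification cost. Everything here is an illustrative assumption rather than the paper's model: the payoff values, the trust threshold tau, the checking probability p_check, and the per-check opportunity cost check_cost are made up for the example.

import random

# One-shot prisoner's dilemma payoffs (illustrative values, not the paper's):
# R = mutual cooperation, T = temptation, S = sucker, P = mutual defection.
R, T, S, P = 3.0, 5.0, 0.0, 1.0
C, D = "C", "D"

def payoff(mine, theirs):
    return {(C, C): R, (C, D): S, (D, C): T, (D, D): P}[(mine, theirs)]

class TitForTat:
    """Classic reciprocal strategy: verifies the co-player every round."""
    def reset(self):
        self.last_seen = C

    def move(self):
        return self.last_seen

    def observe(self, their_move):
        self.last_seen = their_move
        return True  # always checks, so always pays the verification cost

class TrustBased:
    """Reciprocal strategy that, after tau rounds of observed mutual
    cooperation, verifies the co-player only with probability p_check."""
    def __init__(self, tau=5, p_check=0.2):
        self.tau, self.p_check = tau, p_check

    def reset(self):
        self.good_rounds = 0    # consecutive cooperative observations
        self.retaliate = False

    def move(self):
        return D if self.retaliate else C

    def observe(self, their_move):
        if self.good_rounds >= self.tau and random.random() >= self.p_check:
            return False        # trust established: skip the costly check
        if their_move == D:
            self.retaliate, self.good_rounds = True, 0
        else:
            self.retaliate = False
            self.good_rounds += 1
        return True             # a check happened, so its cost is incurred

def play(a, b, rounds=200, check_cost=0.5):
    """Average per-round payoffs for a and b, net of verification costs."""
    a.reset(); b.reset()
    total_a = total_b = 0.0
    for _ in range(rounds):
        ma, mb = a.move(), b.move()
        total_a += payoff(ma, mb) - (check_cost if a.observe(mb) else 0.0)
        total_b += payoff(mb, ma) - (check_cost if b.observe(ma) else 0.0)
    return total_a / rounds, total_b / rounds

if __name__ == "__main__":
    random.seed(1)
    print("TFT   vs TFT:  ", play(TitForTat(), TitForTat()))
    print("Trust vs TFT:  ", play(TrustBased(), TitForTat()))
    print("Trust vs Trust:", play(TrustBased(), TrustBased()))

In this toy setting, once trust is established a trust-based player nets roughly R - p_check * check_cost per round, whereas a Tit-for-Tat player pays the full check_cost every round; that gap is the intuition behind the paper's claim that trust-based strategies win when the verification cost is non-negligible.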

Related research

05/18/2023 · Game Theory with Simulation of Other Players
Game-theoretic interactions with AI agents could differ from traditional...

12/23/2021 · Should transparency be (in-)transparent? On monitoring aversion and cooperation in teams
Many modern organisations employ methods which involve monitoring of emp...

10/26/2021 · Playing Repeated Coopetitive Polymatrix Games with Small Manipulation Cost
Repeated coopetitive games capture the situation when one must efficient...

05/16/2022 · How do people incorporate advice from artificial agents when making physical judgments?
How do people build up trust with artificial agents? Here, we study a ke...

12/06/2021 · A Synergy of Institutional Incentives and Networked Structures in Evolutionary Game Dynamics of Multi-agent Systems
Understanding the emergence of prosocial behaviours (e.g., cooperation a...

08/02/2021 · Tuning Cooperative Behavior in Games with Nonlinear Opinion Dynamics
We examine the tuning of cooperative behavior in repeated multi-agent ga...
