Trust is not all about performance: trust biases in interaction with humans, robots and computers
Trust is essential for sustaining cooperation among humans. The same principle applies to interaction with computers and robots: if we do not trust them, we will not accept their help. Extensive evidence has shown that our trust in other agents depends on their performance. However, in uncertain environments, humans may be unable to correctly estimate other agents' performance, potentially leading to distrust of, or over-trust in, peers and machines. In the current study, we investigate whether humans' trust towards peers, computers and robots is biased by prior beliefs in uncertain interactive settings. Participants made perceptual judgments and observed the simulated estimates of either a human participant, a computer or a social robot. Participants could modify their judgments based on this feedback. Results show that participants' beliefs about the nature of the interacting partner biased their compliance with the partner's judgments, even though the partners' judgments were identical. Surprisingly, the social robot was trusted more than the computer and the human partner. Trust in the alleged human partner was not fully predicted by its perceived performance, suggesting the emergence of normative processes in peer interaction. Our findings offer novel insights into the mechanisms underlying trust towards peers and autonomous agents.