Following wrong suggestions: self-blame in human and computer scenarios

07/01/2019 ∙ by Andrea Beretta, et al. ∙ 0

This paper investigates the experience of following a suggestion from an intelligent machine that leads to a wrong outcome, and the emotions people feel as a result. Adopting a task typical of decision-making studies, we presented participants with two scenarios in which they follow a suggestion, offered either by an expert human being or by an intelligent machine, that turns out to be wrong. We found a significant decrease in perceived responsibility for the wrong choice when the suggestion comes from the machine. To date, few studies have investigated the negative emotions that can arise from a bad outcome after following the suggestion of an intelligent system, or how to cope with the distrust that could undermine long-term use of the system and cooperation with it. This preliminary research has implications for the study of cooperation and decision making with intelligent machines. Further research may address how to present suggestions so as to better help users cope with self-blame.

