Making an agent's trust stable in a series of success and failure tasks through empathy

06/15/2023
by Takahiro Tsumura, et al.

As AI technology develops, trust in AI agents is becoming increasingly important for AI applications in human society. Factors that may improve the trust relationship include empathy, a series of successes and failures, and capability (performance). When trust is appropriately calibrated, the gap between an agent's actual and expected performance is small. In this study, we focus on an agent's empathy and its series of successes and failures as means of increasing trust in AI agents, and we experimentally examine the effect of an agent's empathy toward a person on changes in trust over time. The experiment used a two-factor mixed design: empathy (present, absent) as a between-subjects factor and success-failure series (phases 1 to 5) as a within-subjects factor. An analysis of variance (ANOVA) was conducted on data from 198 participants. The results showed an interaction between the empathy factor and the success-failure series factor: trust in the agent remained stable when empathy was present. This result supports our hypothesis. The study shows that designing AI agents to be empathetic is an important factor for trust and helps humans build appropriate trust relationships with AI agents.
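The kind of analysis the abstract describes can be sketched in a few lines. The code below is a minimal illustration, not the authors' analysis: it assumes a balanced split of the 198 participants into two hypothetical groups of 99, uses textbook sums-of-squares formulas for a balanced two-factor mixed design, and runs on simulated trust ratings in which the empathy group stays flat while the no-empathy group declines, so the stabilizing effect appears as a significant group-by-phase interaction.

```python
import numpy as np
from scipy import stats

def mixed_anova(data_g1, data_g2):
    """Two-factor mixed ANOVA for a balanced design: two between-subjects
    groups, b repeated (within-subjects) measures per subject.
    data_gX: (n_subjects, b) arrays of ratings.
    Returns (F, p) for the group, phase, and interaction effects."""
    groups = [np.asarray(data_g1, float), np.asarray(data_g2, float)]
    a = 2                        # between-subjects levels
    n, b = groups[0].shape       # subjects per group, within-subjects levels
    all_data = np.vstack(groups)            # (a*n, b)
    gm = all_data.mean()                    # grand mean

    ss_total = ((all_data - gm) ** 2).sum()
    subj_means = all_data.mean(axis=1)      # one mean per subject
    ss_between_subj = b * ((subj_means - gm) ** 2).sum()

    group_means = np.array([g.mean() for g in groups])
    ss_a = n * b * ((group_means - gm) ** 2).sum()
    ss_subj_within = ss_between_subj - ss_a  # between-subjects error term

    phase_means = all_data.mean(axis=0)
    ss_b = n * a * ((phase_means - gm) ** 2).sum()
    cell_means = np.array([g.mean(axis=0) for g in groups])  # (a, b)
    ss_ab = n * ((cell_means - gm) ** 2).sum() - ss_a - ss_b
    ss_err_within = ss_total - ss_between_subj - ss_b - ss_ab

    df_a, df_e1 = a - 1, a * (n - 1)
    df_b, df_ab = b - 1, (a - 1) * (b - 1)
    df_e2 = a * (n - 1) * (b - 1)

    f_a = (ss_a / df_a) / (ss_subj_within / df_e1)
    f_b = (ss_b / df_b) / (ss_err_within / df_e2)
    f_ab = (ss_ab / df_ab) / (ss_err_within / df_e2)
    return {
        "group":       (f_a,  stats.f.sf(f_a,  df_a,  df_e1)),
        "phase":       (f_b,  stats.f.sf(f_b,  df_b,  df_e2)),
        "interaction": (f_ab, stats.f.sf(f_ab, df_ab, df_e2)),
    }

# Simulated data (hypothetical, not the study's): 99 participants per
# group, trust rated in each of 5 phases. Empathy group stays near 5;
# no-empathy group's trust drops from 5 to 3 across phases.
rng = np.random.default_rng(0)
empathy    = 5.0 + rng.normal(0, 0.5, (99, 5))
no_empathy = np.linspace(5.0, 3.0, 5) + rng.normal(0, 0.5, (99, 5))
res = mixed_anova(empathy, no_empathy)
f_ab, p_ab = res["interaction"]
print(f"interaction: F(4, 784) = {f_ab:.1f}, p = {p_ab:.3g}")
```

Note that this sketch omits assumption checks a real analysis would include (e.g. sphericity for the repeated-measures factor); it is only meant to show how an interaction between an empathy factor and a success-failure series would surface in a mixed-design ANOVA.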

