Facilitation of human empathy through self-disclosure of anthropomorphic agents

by   Takahiro Tsumura, et al.

As AI technologies advance, the social acceptance of AI agents, including intelligent virtual agents and robots, is becoming increasingly important for broader applications of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. By empathizing, humans act more positively and kindly toward agents, and empathizing makes it easier for humans to accept them. In this study, we focused on self-disclosure from agents to humans in order to realize anthropomorphic agents that elicit empathy from humans. We then experimentally investigated whether an agent's self-disclosure facilitates human empathy. We formulated hypotheses and experimentally analyzed and discussed the conditions under which humans feel more empathy toward agents. The experiment used a three-way mixed design, with the factors being the agent's appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy measured before and after a video stimulus. An analysis of variance was performed on data from 576 participants. We found that the appearance factor had no main effect, and that self-disclosure highly relevant to the scenario used facilitated human empathy with a statistically significant difference. We also found that the absence of self-disclosure suppressed empathy. These results support our hypotheses.





