Evaluating Empathy in Artificial Agents

08/14/2019
by Özge Nilay Yalçın, et al.

The research area of computational empathy is in its infancy and is still developing its methods and standards. One major problem is the lack of agreement on how to evaluate empathy in artificial interactive systems. Although well-established methods exist in psychology, psychiatry, and neuroscience, translating them to computational empathy is not straightforward; a collective effort is needed to develop metrics better suited to interactive artificial agents. This paper is an attempt to initiate a dialogue on this important problem. We examine methods for evaluating empathy in humans and offer suggestions for developing better metrics to evaluate empathy in artificial agents. We acknowledge the difficulty of arriving at a single solution for the vast variety of interactive systems and propose a set of systematic approaches that can be applied across a range of applications and systems.


