On Evaluating and Comparing Conversational Agents

01/11/2018
by Anu Venkatesh et al.

Conversational agents are exploding in popularity. However, much work remains in the area of non-goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a $2.5 million university competition in which sixteen selected university teams built conversational agents to deliver the best social conversational experience. The Alexa Prize gave the academic community a unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is a key element underlying the challenge of building non-goal-oriented dialogue systems. In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics that correlate well with human judgment. The proposed metrics provide granular analysis of the conversational agents that is not captured in human ratings, and we show that they can serve as a reasonable proxy for human judgment. We also provide a mechanism for unifying the metrics to select the top-performing agents, which has been applied throughout the Alexa Prize competition. To our knowledge, this is the largest setting to date for evaluating agents, with millions of conversations and hundreds of thousands of ratings from users. We believe this work is a step towards an automatic evaluation process for conversational AIs.
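The unified-metric idea lends itself to a simple illustration. The Python sketch below shows one way a set of automatic conversation metrics could be combined into a single score by weighting each metric by its correlation with human ratings. The metric names, the synthetic data, and the correlation-based weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch: combine several automatic conversation metrics into a
# single score by weighting each metric by its correlation with human ratings.
# The metric names, synthetic data, and weighting scheme are assumptions for
# illustration, not the exact method used in the Alexa Prize evaluation.
rng = np.random.default_rng(0)
n_conversations = 1000

# Hypothetical per-conversation metric scores in [0, 1].
metrics = {
    "coherence": rng.uniform(0, 1, n_conversations),
    "engagement": rng.uniform(0, 1, n_conversations),
    "topical_breadth": rng.uniform(0, 1, n_conversations),
}

# Simulated user ratings (roughly on a 1-5 scale) that partially track the
# metrics, standing in for the human ratings collected during the competition.
user_ratings = (
    1.0
    + 2.0 * metrics["coherence"]
    + 1.5 * metrics["engagement"]
    + 0.5 * metrics["topical_breadth"]
    + rng.normal(0.0, 0.3, n_conversations)
)

# Weight each metric by its (non-negative) Pearson correlation with the human
# ratings, normalize the weights, and form a unified per-conversation score.
weights = {
    name: max(np.corrcoef(values, user_ratings)[0, 1], 0.0)
    for name, values in metrics.items()
}
total = sum(weights.values()) or 1.0
weights = {name: w / total for name, w in weights.items()}

unified_score = sum(w * metrics[name] for name, w in weights.items())
print("metric weights:", {k: round(v, 3) for k, v in weights.items()})
print("mean unified score:", round(float(unified_score.mean()), 3))
```

In this kind of scheme, metrics that track human judgment more closely contribute more to the final ranking, which is one way to reduce the subjectivity of comparing agents on ratings alone.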


