Measuring Conversational Fluidity in Automated Dialogue Agents

10/25/2019
by Keith Vella, et al.

We present an automated evaluation method to measure fluidity in conversational dialogue systems. The method combines various state-of-the-art Natural Language Processing tools into a classifier and uses human ratings of the dialogues to train an automated judgment model. Our experiments show that the results improve on existing metrics for measuring fluidity.
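The abstract does not name the specific NLP tools or the classifier the authors use, so the sketch below is only illustrative: it trains a hypothetical judgment model on placeholder per-dialogue feature scores paired with binary human fluidity ratings, with scikit-learn logistic regression standing in for the paper's unspecified classifier. The feature names and data are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' implementation): fit an automated
# fluidity judge from per-dialogue NLP feature scores and human ratings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: each row holds scores from hypothetical NLP tools
# (e.g. grammaticality, coherence, response relevance) for one dialogue.
X = rng.random((200, 3))
# Placeholder human judgments: 1 = dialogue rated fluid, 0 = not fluid.
y = (X.mean(axis=1) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The trained model then serves as an automated judge on unseen dialogues.
judge = LogisticRegression().fit(X_train, y_train)
print("agreement with held-out human ratings:",
      accuracy_score(y_test, judge.predict(X_test)))
```

In this framing, the learned model approximates human fluidity judgments, so agreement with held-out human ratings is the natural sanity check; the paper's actual training setup and evaluation protocol may differ.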

Related research:

09/26/2019: Towards a Metric for Automated Conversational Dialogue System Evaluation and Improvement
We present "AutoJudge", an automated evaluation method for conversationa...

10/16/2021: On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark
Dialogue safety problems severely limit the real-world deployment of neu...

05/16/2023: Mirages: On Anthropomorphism in Dialogue Systems
Automated dialogue or conversational systems are anthropomorphised by de...

06/10/2020: Towards Unified Dialogue System Evaluation: A Comprehensive Analysis of Current Evaluation Protocols
As conversational AI-based dialogue management has increasingly become a...

05/16/2018: A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation
Natural language generation lies at the core of generative dialogue syst...

02/17/2019: An Automated Testing Framework for Conversational Agents
Conversational agents are systems with a conversational interface that a...

12/23/2021: Measuring Attribution in Natural Language Generation Models
With recent improvements in natural language generation (NLG) models for...
