Measuring and Improving Semantic Diversity of Dialogue Generation

by Seungju Han, et al.

Response diversity has become an important criterion for evaluating the quality of open-domain dialogue generation models. However, current evaluation metrics for response diversity often fail to capture the semantic diversity of generated responses, as they mainly consider lexical aspects. In this paper, we introduce a new automatic evaluation metric to measure the semantic diversity of generated responses. Through human evaluation, we demonstrate that our proposed metric captures human judgments on response diversity better than existing lexical-level diversity metrics. Furthermore, motivated by analysis of an existing dialogue dataset, we propose a simple yet effective learning method that improves the semantic diversity of generated responses. Our learning method weights training samples based on the semantic distribution of the training set. Automatic and human evaluations show that our learning method improves response diversity and coherence better than other baseline methods.
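The abstract does not spell out the metric or the weighting scheme, but both ideas can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's: semantic diversity is approximated as the mean pairwise cosine distance between response embeddings (a real implementation would use a sentence encoder such as Sentence-BERT; toy vectors stand in below), and "weighting by the semantic distribution" is read as down-weighting samples from over-represented semantic clusters. The function names `semantic_diversity` and `frequency_weights` are hypothetical.

```python
import math
from collections import Counter

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def semantic_diversity(embeddings):
    """Mean pairwise cosine distance over all response pairs
    (a hypothetical stand-in for the paper's metric)."""
    n = len(embeddings)
    if n < 2:
        return 0.0
    dists = [cosine_distance(embeddings[i], embeddings[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)

def frequency_weights(cluster_ids):
    """Weight each training sample inversely to the size of its
    semantic cluster -- one plausible reading of 'weighting samples
    based on the semantic distribution of the training set'."""
    counts = Counter(cluster_ids)
    return [1.0 / counts[c] for c in cluster_ids]

# Toy embeddings: three near-duplicate responses vs. three distinct ones.
dull = [[1.0, 0.0, 0.0], [0.99, 0.1, 0.0], [0.98, 0.05, 0.01]]
varied = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

assert semantic_diversity(varied) > semantic_diversity(dull)

# Samples from the over-represented cluster 0 get smaller weights.
weights = frequency_weights([0, 0, 0, 1])
assert weights == [1/3, 1/3, 1/3, 1.0]
```

Under this reading, frequent "dull" response clusters contribute less to the training loss, pushing the model toward the long tail of the semantic distribution.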

Code Repositories


An official codebase for the paper, "Measuring and Improving Semantic Diversity of Dialogue Generation", EMNLP 2022 Findings