A New Data Normalization Method to Improve Dialogue Generation by Minimizing Long Tail Effect

05/04/2020
by Zhiqiang Zhan, et al.

Recent neural models have shown significant progress in dialogue generation. Most generation models are based on language models. However, due to the long-tail phenomenon in linguistics, trained models tend to generate words that appear frequently in the training data, leading to monotonous responses. To address this issue, we analyze a large corpus from Wikipedia and propose three frequency-based data normalization methods. We conduct extensive experiments with Transformer-based models on three datasets collected from social media, subtitles, and an industrial application. Experimental results demonstrate significant improvements in the diversity and informativeness (defined as the numbers of nouns and verbs) of generated responses. More specifically, unigram and bigram diversity increase by 2.6% on the three datasets, and informativeness, i.e. the numbers of nouns and verbs, increases by 4.0%. Additionally, their simplicity and effectiveness enable our methods to be adapted to different generation models without much extra computational cost.
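The abstract does not spell out the three normalization methods, so the sketch below shows just one plausible instance of frequency-based normalization: reweighting the training loss by inverse token frequency, so that head (high-frequency) words contribute less gradient than tail words. The function name, the smoothing exponent alpha, and the toy corpus are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F
from collections import Counter

def inverse_frequency_weights(token_ids, vocab_size, alpha=0.5, eps=1.0):
    """Hypothetical sketch: weight each vocabulary item by
    (count + eps) ** (-alpha), rescaled so the mean weight is 1.
    This damps the loss contribution of frequent (head) tokens."""
    counts = torch.full((vocab_size,), eps)
    for tok, c in Counter(token_ids).items():
        counts[tok] += c
    weights = counts ** (-alpha)
    return weights * (vocab_size / weights.sum())

# Toy usage: weighted cross-entropy over a small vocabulary.
vocab_size = 10
corpus = [0, 0, 0, 0, 1, 1, 2, 3, 4]   # token id 0 dominates (the "head")
weights = inverse_frequency_weights(corpus, vocab_size)

logits = torch.randn(4, vocab_size)    # model outputs for 4 positions
targets = torch.tensor([0, 1, 2, 9])   # 9 is an unseen (tail) token
loss = F.cross_entropy(logits, targets, weight=weights)
print(weights, loss.item())
```

Because the weights are computed once from corpus statistics and only rescale the per-token loss, this kind of reweighting adds essentially no training-time overhead, which is consistent with the abstract's claim that the methods adapt to different generation models cheaply.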

Related research

08/20/2020 · Controlling Dialogue Generation with Semantic Exemplars
Dialogue systems pretrained with large language models generate locally ...

03/09/2020 · An Empirical Investigation of Pre-Trained Transformer Language Models for Open-Domain Dialogue Generation
We present an empirical investigation of pre-trained Transformer-based a...

03/06/2022 · Leashing the Inner Demons: Self-Detoxification for Language Models
Language models (LMs) can reproduce (or amplify) toxic language seen dur...

04/24/2023 · ChatLLM Network: More brains, More intelligence
Dialogue-based language models mark a huge milestone in the field of art...

06/20/2021 · A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
Neural models trained for next utterance generation in dialogue task lea...

05/15/2022 · Long-term Control for Dialogue Generation: Methods and Evaluation
Current approaches for controlling dialogue response generation are prim...

05/25/2023 · Healing Unsafe Dialogue Responses with Weak Supervision Signals
Recent years have seen increasing concerns about the unsafe response gen...
