Open-Domain Dialogue Generation Based on Pre-trained Language Models

10/24/2020
by   Yan Zeng, et al.

Pre-trained language models have been successfully used for response generation in open-domain dialogue. Four main frameworks have been proposed: (1) Transformer-ED, which uses separate Transformer encoder and decoder stacks for the source and target sentences; (2) Transformer-Dec, which uses a single Transformer decoder for both source and target sentences; (3) Transformer-MLM, which uses a Transformer decoder with bidirectional attention on the source side and left-to-right attention on the target side, trained with a masked language model objective; and (4) Transformer-AR, which uses the same attention pattern but an auto-regressive objective instead. In this study, we compare these frameworks on three datasets, and our comparison reveals that the best framework applies bidirectional attention on the source side and does not separate the encoder and decoder. We also examine model discrepancies, and our experiments confirm that a model's performance is directly impacted by the underlying discrepancies. We then propose two correction methods to reduce the discrepancies, and both improve model performance. These results show that discrepancy is an important factor to consider when using a pre-trained model, and that reducing it can lead to improved performance.
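The key distinction among frameworks (2)–(4) is the attention mask used inside a single Transformer stack. As a minimal sketch (not the paper's implementation; the function name and NumPy representation are illustrative assumptions), a mask allowing bidirectional attention within the source segment and causal attention within the target segment can be built as follows:

```python
import numpy as np

def seq2seq_attention_mask(src_len, tgt_len):
    """Attention mask for a single-stack Transformer on a concatenated
    [source; target] sequence: source tokens attend bidirectionally among
    themselves; target tokens attend to the full source and only to
    preceding target tokens. 1 = may attend, 0 = masked."""
    total = src_len + tgt_len
    mask = np.zeros((total, total), dtype=int)
    # Every position may attend to the entire source segment (bidirectional).
    mask[:, :src_len] = 1
    # Target positions attend causally within the target segment.
    for i in range(src_len, total):
        mask[i, src_len:i + 1] = 1
    return mask

m = seq2seq_attention_mask(2, 3)
```

With this mask, source positions never see target tokens, while each target position sees the whole source plus its own prefix; a Transformer-Dec-style model would instead use a fully causal (lower-triangular) mask over the whole sequence.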


