
Transformer-Based Conditioned Variational Autoencoder for Dialogue Generation

by Huihui Yang, et al.

In human dialogue, a single query may elicit many appropriate responses. Because a Transformer-based dialogue model is a one-to-one mapping function, it tends to produce sentences that occur frequently in the corpus. The conditional variational autoencoder (CVAE) is a technique for reducing such generic replies. In this paper, we build a new dialogue model (CVAE-T) that combines the Transformer with a CVAE structure. We use a pre-trained masked language model (MLM) to rewrite key n-grams in responses, yielding a series of negative examples, and introduce a regularization term during training that explicitly guides the latent variable to learn the semantic differences between each pair of positive and negative examples. Experiments suggest that the proposed method produces more informative replies.
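The objective sketched in the abstract can be read as a standard CVAE loss (reconstruction plus KL) augmented with a regularizer over positive/negative pairs. The sketch below is an illustrative guess at such an objective, not the paper's exact formulation: the hinge-style term that pushes the latent code of a gold response away from that of its MLM-rewritten negative is an assumption, and all function names are hypothetical.

```python
import numpy as np

def kl_divergence(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def cvae_t_loss(recon_nll, mu_pos, logvar_pos, mu_neg,
                margin=1.0, reg_weight=0.1):
    """Sketch of a CVAE objective with a contrastive regularizer.

    recon_nll          -- reconstruction NLL of the gold response
    mu_pos, logvar_pos -- posterior parameters for the positive example
    mu_neg             -- posterior mean for the MLM-rewritten negative
    The hinge term (an assumption) penalizes positive/negative latent
    codes that sit closer together than `margin`.
    """
    kl = kl_divergence(mu_pos, logvar_pos)
    dist = np.linalg.norm(mu_pos - mu_neg)
    reg = max(0.0, margin - dist)  # zero once the pair is far enough apart
    return recon_nll + kl + reg_weight * reg
```

With a standard-normal posterior (zero mean, unit variance) the KL term vanishes, so the loss reduces to the reconstruction NLL plus the weighted hinge penalty.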




DLVGen: A Dual Latent Variable Approach to Personalized Dialogue Generation

The generation of personalized dialogue is vital to natural and human-li...

Latent Intention Dialogue Models

Developing a dialogue agent that is capable of making autonomous decisio...

Diversifying Neural Dialogue Generation via Negative Distillation

Generative dialogue models suffer badly from the generic response proble...

Variational Transformers for Diverse Response Generation

Despite the great promise of Transformers in many sequence modeling task...

Mitigating Negative Style Transfer in Hybrid Dialogue System

As the functionality of dialogue systems evolves, hybrid dialogue system...

Challenging Instances are Worth Learning: Generating Valuable Negative Samples for Response Selection Training

Retrieval-based chatbot selects the appropriate response from candidates...

Generative Visual Dialogue System via Adaptive Reasoning and Weighted Likelihood Estimation

The key challenge of generative Visual Dialogue (VD) systems is to respo...