Retrieve and Refine: Improved Sequence Generation Models For Dialogue

by Jason Weston, et al.

Sequence generation models for dialogue are known to have several problems: they tend to produce short, generic sentences that are uninformative and unengaging. Retrieval models, on the other hand, can surface interesting responses, but are restricted to the given retrieval set, leading to erroneous replies that cannot be tuned to the specific context. In this work we develop a model that combines the two approaches to avoid both their deficiencies: first retrieve a response and then refine it, with the final sequence generator treating the retrieval as additional context. We show on the recent ConvAI2 challenge task that our approach produces responses superior to both standard retrieval and generation models in human evaluations.
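The retrieve-and-refine pipeline described in the abstract can be sketched as two stages: score a fixed candidate set against the dialogue context, then condition the generator on both the context and the retrieved response. The sketch below is illustrative only; the function names (`retrieve`, `refine`), the word-overlap scorer, and the `[SEP]` delimiter are assumptions, not the paper's actual model, which uses a trained retrieval model and a seq2seq generator.

```python
# Hypothetical sketch of retrieve-and-refine, assuming a simple
# word-overlap retriever and a stub generator (the paper uses trained
# neural models for both stages).

def word_overlap(a: str, b: str) -> int:
    """Crude retrieval score: number of shared lowercase tokens."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(context: str, candidates: list) -> str:
    """Stage 1: pick the candidate most similar to the dialogue context."""
    return max(candidates, key=lambda c: word_overlap(context, c))

def refine(context: str, retrieved: str) -> str:
    """Stage 2 stand-in: a real seq2seq generator would decode a new
    response conditioned on [context; retrieved]; here we only show
    how the retrieval is appended as additional context."""
    return context + " [SEP] " + retrieved

candidates = [
    "I love hiking in the mountains.",
    "My favorite food is pizza.",
    "I work as a teacher.",
]
context = "What food do you like to eat?"
retrieved = retrieve(context, candidates)
generator_input = refine(context, retrieved)
print(retrieved)         # the candidate sharing the word "food" wins
print(generator_input)   # context and retrieval joined for the generator
```

The key design point the abstract makes is that the generator is free to copy, edit, or ignore the retrieved response, so the final output is not restricted to the retrieval set.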


Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory

For dialogue response generation, traditional generative models generate...

DeepCopy: Grounded Response Generation with Hierarchical Pointer Networks

Recent advances in neural sequence-to-sequence models have led to promis...

Reason first, then respond: Modular Generation for Knowledge-infused Dialogue

Large language models can produce fluent dialogue but often hallucinate ...

Retrieve Memorize: Dialog Policy Learning with Multi-Action Memory

Dialogue policy learning, a subtask that determines the content of syste...

N-best Response-based Analysis of Contradiction-awareness in Neural Response Generation Models

Avoiding the generation of responses that contradict the preceding conte...

A Dataset for Sentence Retrieval for Open-Ended Dialogues

We address the task of sentence retrieval for open-ended dialogues. The ...

Image Transformation Sequence Retrieval with General Reinforcement Learning

In this work, the novel Image Transformation Sequence Retrieval (ITSR) t...