Unsupervised Injection of Knowledge into Dialogue Generation via Language Models

04/30/2020
by   Yi-Lin Tuan, et al.

Neural conversation models have shown the ability to produce more meaningful and engaging responses when given external knowledge. Specifically, the knowledge we experiment with is in textual form, for example, a personality description. Despite the success of training and testing with external knowledge, in reality we do not always have sufficient background knowledge about the topic under discussion. It is therefore also crucial for models to generate captivating responses without external knowledge. To achieve this, we propose a unified training method, Decoupling, which induces a knowledge-related sentence and couples it with the dialogue history to generate a response in an unsupervised fashion. We further analyze its effect by testing the models with no knowledge, partial knowledge, and the full text of the knowledge. Empirically, we observe that performance varies significantly with the amount of knowledge provided, and that our method performs closer to the supervised method (the upper bound) than the baselines do.
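The two-stage idea behind Decoupling can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `induce_knowledge` and `generate_response` functions here are hypothetical stand-ins for the language models the paper trains, and serve only to show the pipeline of inducing a knowledge sentence and then coupling it with the dialogue history.

```python
# Illustrative sketch of the two-stage "Decoupling" idea from the abstract:
# stage 1 induces a knowledge-related sentence from the dialogue history
# alone; stage 2 couples that sentence with the history to condition the
# response generator. Both functions are toy stand-ins, not the paper's models.

def induce_knowledge(history):
    """Stand-in for an LM that produces a knowledge-related sentence
    from the dialogue history when no external knowledge is available."""
    topic = history[-1].split()[-1].strip("?.!")
    return f"{topic} is the topic under discussion."

def generate_response(knowledge, history):
    """Stand-in for a generator conditioned on the induced knowledge
    coupled (here, simply concatenated) with the dialogue history."""
    context = knowledge + " " + " ".join(history)
    return f"Response conditioned on: {context}"

history = ["Do you like jazz?"]
knowledge = induce_knowledge(history)             # stage 1: induce
response = generate_response(knowledge, history)  # stage 2: couple + generate
print(response)
```

In the actual method both stages would be realized by trained language models; the point of the sketch is only that the knowledge sentence is produced in an unsupervised fashion and then treated as if it were externally provided.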

