Reason first, then respond: Modular Generation for Knowledge-infused Dialogue

11/09/2021
by Leonard Adolphs, et al.

Large language models can produce fluent dialogue but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this issue, they still face the difficult challenge of simultaneously reasoning to provide correct knowledge and generating a conversational response. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks down this problem into two easier steps. K2R first generates a knowledge sequence, given a dialogue context, as an intermediate step. After this "reasoning step", the model attends to its own generated knowledge sequence, as well as the dialogue context, to produce a final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks and has advantages in terms of interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together to enable dialogue agents to give knowledgeable answers, or QA models to give conversational responses, in a zero-shot setting.
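The two-step pipeline described in the abstract can be sketched in a few lines of Python. This is only a minimal illustration under assumptions, not the authors' released implementation: the checkpoint names, the __knowledge__ separator, and the k2r_respond helper are hypothetical placeholders standing in for K2R's trained knowledge and response modules.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoints (assumption): any seq2seq models stand in here for
# K2R's trained knowledge model and response model.
KNOWLEDGE_CKPT = "facebook/bart-large"
RESPONSE_CKPT = "facebook/bart-large"

tokenizer = AutoTokenizer.from_pretrained(KNOWLEDGE_CKPT)
knowledge_model = AutoModelForSeq2SeqLM.from_pretrained(KNOWLEDGE_CKPT)
response_model = AutoModelForSeq2SeqLM.from_pretrained(RESPONSE_CKPT)

def generate(model, prompt: str, max_new_tokens: int = 64) -> str:
    """Run one seq2seq generation step and return the decoded string."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def k2r_respond(dialogue_context: str) -> str:
    # Step 1 ("reasoning step"): predict a knowledge sequence from the dialogue context.
    knowledge = generate(knowledge_model, dialogue_context)

    # Step 2: condition the response model on both the dialogue context and the
    # model's own generated knowledge. The concatenation format with a
    # "__knowledge__" separator is an assumption for illustration.
    response_input = f"{dialogue_context} __knowledge__ {knowledge}"
    return generate(response_model, response_input)

if __name__ == "__main__":
    print(k2r_respond("Who wrote the novel Dune?"))

Because the intermediate knowledge sequence is an explicit string, it can be inspected or swapped out, which is what gives the modular design its interpretability and lets a separately trained QA model supply the knowledge step in a zero-shot setting.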

Related research

PK-ICR: Persona-Knowledge Interactive Context Retrieval for Grounded Dialogue (02/13/2023)
Identifying relevant Persona or Knowledge for conversational systems is ...

Retrieval Augmentation Reduces Hallucination in Conversation (04/15/2021)
Despite showing increasingly human-like conversational abilities, state-...

Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation (05/30/2023)
Language models have achieved impressive performances on dialogue genera...

MACA: A Modular Architecture for Conversational Agents (05/01/2017)
We propose a software architecture designed to ease the implementation o...

cTBL: Augmenting Large Language Models for Conversational Tables (03/21/2023)
An open challenge in multimodal conversational AI requires augmenting la...

Retrieve and Refine: Improved Sequence Generation Models For Dialogue (08/14/2018)
Sequence generation models for dialogue are known to have several proble...

Improving a sequence-to-sequence nlp model using a reinforcement learning policy algorithm (12/28/2022)
Nowadays, the current neural network models of dialogue generation(chatb...
