Related research

- Learning to Select Knowledge for Response Generation in Dialog Systems
  Generating informative responses in end-to-end neural dialogue systems a...
- Content Selection Network for Document-grounded Retrieval-based Chatbots
  Grounding human-machine conversation in a document is an effective way t...

- Conversational Word Embedding for Retrieval-Based Dialog System
  Human conversations contain many types of information, e.g., knowledge, ...

- Knowledge Aware Conversation Generation with Explainable Reasoning on Augmented Graphs
  Two types of knowledge, triples from knowledge graphs and texts from uns...

- Improving Background Based Conversation with Context-aware Knowledge Pre-selection
  Background Based Conversations (BBCs) have been developed to make dialog...

Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation
In a multi-turn knowledge-grounded dialog, the difference between the knowledge selected at different turns usually provides potential clues for knowledge selection, a signal that has been largely neglected in previous research. In this paper, we propose a difference-aware knowledge selection method. It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in previous turns. The differential information is then fused with, or disentangled from, the contextual information to facilitate the final knowledge selection. Automatic, human observational, and interactive evaluations show that our method selects knowledge more accurately and generates more informative responses, significantly outperforming state-of-the-art baselines. The code is available at https://github.com/chujiezheng/DiffKS.
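To make the selection step concrete, here is a minimal PyTorch sketch of difference-aware scoring, assuming pre-computed embeddings for the candidate sentences, the previously selected knowledge, and the dialog context. The class name, the subtraction-based difference, and the concatenation-based fusion are illustrative assumptions of this sketch, not the paper's exact architecture (the disentangling variant is omitted); see the repository above for the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferenceAwareSelector(nn.Module):
    """Illustrative sketch of difference-aware knowledge selection.

    Scores each candidate knowledge sentence using (a) the candidate
    itself, (b) its difference from previously selected knowledge, and
    (c) the dialog context. Names and the fusion scheme are assumptions
    of this sketch, not the DiffKS architecture.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        # Fuse [candidate; difference; context] into one hidden vector.
        self.fuse = nn.Linear(3 * hidden_size, hidden_size)
        # Map the fused vector to a scalar selection score.
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, candidates, prev_selected, context):
        # candidates:    (num_candidates, hidden) current-turn knowledge sentences
        # prev_selected: (hidden,) pooled knowledge chosen at earlier turns
        # context:       (hidden,) dialog context representation
        num_candidates = candidates.size(0)

        # Difference between each candidate and the previously selected
        # knowledge; plain subtraction stands in for a learned difference.
        diff = candidates - prev_selected.unsqueeze(0)

        ctx = context.unsqueeze(0).expand(num_candidates, -1)
        fused = torch.tanh(self.fuse(torch.cat([candidates, diff, ctx], dim=-1)))

        # Distribution over the candidate knowledge sentences.
        return F.softmax(self.score(fused).squeeze(-1), dim=-1)


# Example usage with random embeddings:
selector = DifferenceAwareSelector(hidden_size=256)
probs = selector(
    torch.randn(5, 256),  # 5 candidate knowledge sentences
    torch.randn(256),     # previously selected knowledge (pooled)
    torch.randn(256),     # dialog context
)
print(probs)  # (5,) selection probabilities over candidates
```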