AMUSED: A Multi-Stream Vector Representation Method for Use in Natural Dialogue

12/04/2019
by Gaurav Kumar, et al.

The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an open area of research. Current architectures account only for the semantic and contextual information of a given query and fail to fully capture the syntactic and external knowledge that are crucial for generating responses in a chit-chat system. To overcome this problem, we propose an end-to-end multi-stream deep learning architecture that learns unified embeddings for query-response pairs by leveraging contextual information from memory networks and syntactic information from Graph Convolutional Networks (GCNs) over their dependency parses. One stream of this network also employs transfer learning, pre-training a bidirectional transformer to extract a semantic representation of each input sentence, and incorporates external knowledge through the neighborhood of entities from a Knowledge Base (KB). We benchmark these embeddings on the next sentence prediction task and significantly improve upon existing techniques. Furthermore, we use AMUSED to represent queries and responses along with their context to develop a retrieval-based conversational agent, which expert linguists have validated as engaging comprehensively with humans.
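The multi-stream idea in the abstract can be sketched as follows: each stream (semantic, syntactic, contextual, external knowledge) produces its own vector for a sentence, the vectors are concatenated and projected into one unified embedding, and query-response pairs are then scored for retrieval or next sentence prediction. This is a minimal NumPy sketch, not the paper's implementation: the stream encoders are random placeholders standing in for the transformer, GCN, memory network, and KB-neighborhood components, and all dimensions and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # per-stream embedding size (placeholder; the paper's sizes differ)

# Placeholder stream encoders. In AMUSED these would be a pre-trained
# bidirectional transformer (semantics), a GCN over the dependency parse
# (syntax), a memory network (dialogue context), and features from the
# KB neighborhood of entities (external knowledge).
def semantic_stream(sent):            return rng.standard_normal(D)
def syntactic_stream(sent):           return rng.standard_normal(D)
def memory_stream(sent, context):     return rng.standard_normal(D)
def kb_stream(sent):                  return rng.standard_normal(D)

# Learned projection from the concatenated streams to the unified space
# (randomly initialized here; trained end-to-end in the actual model).
W = rng.standard_normal((4 * D, D))

def unified_embedding(sent, context=()):
    """Concatenate the four stream vectors and project to one embedding."""
    streams = np.concatenate([
        semantic_stream(sent),
        syntactic_stream(sent),
        memory_stream(sent, context),
        kb_stream(sent),
    ])
    return streams @ W

def score(query, response, context=()):
    """Dot-product relevance score between unified query and response
    embeddings, usable for next sentence prediction / response ranking."""
    q = unified_embedding(query, context)
    r = unified_embedding(response, context)
    return float(q @ r)

s = score("how are you?", "i am fine, thanks.")
```

In a retrieval-based agent, `score` would be evaluated against every candidate response and the top-ranked candidate returned; the key design point the abstract describes is that all four information sources are fused into a single vector before scoring.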


