
Attentive Memory Networks: Efficient Machine Reading for Conversational Search

by Tom Kenter, et al.
University of Amsterdam

Recent advances in conversational systems have changed the search paradigm. Traditionally, a user poses a query to a search engine that returns an answer based on its index, possibly leveraging external knowledge bases and conditioning the response on earlier interactions in the search session. In a natural conversation, there is an additional source of information to take into account: utterances produced earlier in a conversation can also be referred to, and a conversational IR system has to keep track of information conveyed by the user during the conversation, even if it is implicit. We argue that the process of building a representation of the conversation can be framed as a machine reading task, where an automated system is presented with a number of statements about which it should answer questions. The questions should be answered solely by referring to the statements provided, without consulting external knowledge. The time is right for the information retrieval community to embrace this task, both as a stand-alone task and integrated in a broader conversational search setting. In this paper, we focus on machine reading as a stand-alone task and present the Attentive Memory Network (AMN), an end-to-end trainable machine reading algorithm. Its key contribution is in efficiency, achieved by a hierarchical input encoder that iterates over the input only once. Speed is an important requirement in the setting of conversational search, as gaps between conversational turns have a detrimental effect on naturalness. On 20 datasets commonly used for evaluating machine reading algorithms we show that the AMN achieves performance comparable to state-of-the-art models, while using considerably fewer computations.
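The single-pass, hierarchical reading described above can be sketched roughly as follows. This is a minimal illustration, not the AMN itself: the toy embeddings, the averaging `encode` function, and the dot-product `attend` helper are all stand-ins (the paper uses trainable recurrent encoders), but the shape of the computation is the same — encode each statement once, bottom-up, into a memory, then answer a question by attending over those memories.

```python
import math

# Toy word embeddings (purely illustrative; a real system learns
# representations with recurrent encoders, not fixed vectors like these).
EMB = {
    "mary":    [1.0, 0.0, 0.0, 0.0],
    "john":    [0.0, 1.0, 0.0, 0.0],
    "kitchen": [0.0, 0.0, 1.0, 0.0],
    "garden":  [0.0, 0.0, 0.0, 1.0],
    "went":    [0.0, 0.0, 0.0, 0.0],
    "where":   [0.0, 0.0, 0.0, 0.0],
    "is":      [0.0, 0.0, 0.0, 0.0],
}

def encode(tokens):
    """Word level of the hierarchy: one vector per sentence.

    A mean of word embeddings stands in here for a word-level RNN encoder.
    """
    vecs = [EMB[t] for t in tokens]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def attend(query, memories):
    """Sentence level: soft attention over the encoded statements.

    Returns the softmax attention weights and the weighted sum (context
    vector) that a decoder could condition on to produce the answer.
    """
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memories]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * mem[i] for w, mem in zip(weights, memories))
               for i in range(len(query))]
    return weights, context

# Each statement is read exactly once, bottom-up: words -> sentence memory.
story = [["mary", "went", "kitchen"], ["john", "went", "garden"]]
memories = [encode(s) for s in story]

# The question then attends over the memories built in that single pass;
# the statement mentioning "mary" receives the larger attention weight.
weights, ctx = attend(encode(["where", "is", "mary"]), memories)
```

The efficiency argument from the abstract shows up in the structure: the story is traversed once to build `memories`, and every subsequent question reuses them instead of re-reading the input.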

