Improving Conditioning in Context-Aware Sequence to Sequence Models

11/21/2019
by Xinyi Wang et al.

Neural sequence-to-sequence models are well established for applications that can be cast as mapping a single input sequence into a single output sequence. In this work, we focus on cases where generation is conditioned on both a short query and a long context, such as abstractive question answering or document-level translation. We modify the standard sequence-to-sequence approach to make better use of both the query and the context by expanding the conditioning mechanism to intertwine query and context attention. We also introduce a simple and efficient data augmentation method for the proposed model. Experiments on three different tasks show that both changes lead to consistent improvements.
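As a rough illustration of what intertwining query and context attention could look like at a single decoding step, the sketch below attends over an encoded short query and an encoded long context from the same decoder state and mixes the two summaries with a sigmoid gate. The shapes, the gating rule, and all names (attend, query_context_step, W_gate) are assumptions made for illustration only, not the mechanism actually proposed in the paper.

# Minimal sketch, assuming simple dot-product attention and a gated mix of
# query and context summaries. Not the paper's exact formulation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(decoder_state, memory):
    # decoder_state: (d,) current decoder hidden state
    # memory:        (n, d) encoded query or context tokens
    # returns:       (d,) attention summary over the memory
    scores = memory @ decoder_state / np.sqrt(memory.shape[-1])
    weights = softmax(scores)
    return weights @ memory

def query_context_step(decoder_state, query_enc, context_enc, W_gate):
    # One hypothetical decoding step conditioned on both query and context.
    q_summary = attend(decoder_state, query_enc)    # attend over the short query
    c_summary = attend(decoder_state, context_enc)  # attend over the long context
    # A sigmoid gate over the concatenated state and summaries decides how much
    # to rely on query vs. context information at this step (assumed mixing rule).
    features = np.concatenate([decoder_state, q_summary, c_summary])
    gate = 1.0 / (1.0 + np.exp(-(W_gate @ features)))
    return gate * q_summary + (1.0 - gate) * c_summary

# Toy usage with random encodings.
rng = np.random.default_rng(0)
d = 8
state = rng.normal(size=d)
query_enc = rng.normal(size=(5, d))      # short query: 5 tokens
context_enc = rng.normal(size=(200, d))  # long context: 200 tokens
W_gate = rng.normal(size=(d, 3 * d))     # per-dimension gate over the mix
mixed = query_context_step(state, query_enc, context_enc, W_gate)
print(mixed.shape)  # (8,)

In an actual model the gate and attention parameters would be learned jointly with the rest of the sequence-to-sequence network; the point of the sketch is only that the query and context each get their own attention and are combined before predicting the next token.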

Related research

Sequence to Multi-Sequence Learning via Conditional Chain Mapping for Mixture Signals (06/25/2020)
Copy this Sentence (05/23/2019)
Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models (04/25/2018)
Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs (10/18/2019)
Sequence to Sequence Learning for Query Expansion (12/25/2018)
Imperial College London Submission to VATEX Video Captioning Task (10/16/2019)
Plan, Attend, Generate: Planning for Sequence-to-Sequence Models (11/28/2017)
