Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding

08/16/2019
by Jean-Baptiste Remy, et al.

The Hierarchical Attention Network (HAN) has made great strides, but it suffers from a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN consistently outperforms HAN, with only a modest increase in computation time. While these results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available.
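To make the level-1 modification concrete, below is a minimal NumPy sketch of context-aware sentence attention combined with a bidirectional document pass. This is an illustration under stated assumptions, not the authors' implementation: the running-mean context vector, the parameter shapes, and all names (attend, encode_document, W, Wc, u) are our own, and passing a zero context vector recovers plain HAN-style attention.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(H, c, W, Wc, u):
    """Context-aware attention over word states H (T x d).
    c summarizes surrounding sentences; c = 0 recovers plain
    HAN attention, where each sentence is encoded in isolation."""
    scores = np.tanh(H @ W.T + c @ Wc.T) @ u   # (T,)
    alpha = softmax(scores)                     # attention weights
    return alpha @ H                            # sentence vector (d,)

def encode_document(word_states, d_a, rng):
    """Bidirectional CAHAN-style encoding (sketch).
    word_states: list of (T_j x d) arrays, one per sentence."""
    d = word_states[0].shape[1]
    W  = rng.standard_normal((d_a, d)) * 0.1   # illustrative parameters
    Wc = rng.standard_normal((d_a, d)) * 0.1
    u  = rng.standard_normal(d_a) * 0.1

    # Forward pass: context = running mean of preceding sentence vectors.
    fwd, ctx = [], np.zeros(d)
    for j, H in enumerate(word_states):
        s = attend(H, ctx, W, Wc, u)
        fwd.append(s)
        ctx = (ctx * j + s) / (j + 1)

    # Backward pass: context = running mean of following sentence vectors.
    n = len(word_states)
    bwd, ctx = [None] * n, np.zeros(d)
    for j in range(n - 1, -1, -1):
        s = attend(word_states[j], ctx, W, Wc, u)
        bwd[j] = s
        k = n - 1 - j
        ctx = (ctx * k + s) / (k + 1)

    # Concatenate both directions, as in a bidirectional encoder.
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
sents = [rng.standard_normal((t, 8)) for t in (5, 7, 4)]  # toy word states
doc = encode_document(sents, d_a=16, rng=rng)
print([v.shape for v in doc])  # [(16,), (16,), (16,)]
```

In the actual model, the word states would come from a recurrent word encoder and the resulting sentence vectors would feed the level-2 document encoder; this sketch only isolates how a summary of the surrounding sentences enters the attention scores in each direction.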


Related research

03/21/2019  Selective Attention for Context-aware Neural Machine Translation
Despite the progress made in sentence-level NMT, current systems still f...

05/16/2019  HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
Neural extractive summarization models usually employ a hierarchical enc...

03/31/2021  Divide and Rule: Training Context-Aware Multi-Encoder Translation Models with Little Resources
Multi-encoder models are a broad family of context-aware Neural Machine ...

05/15/2019  When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion
Though machine translation errors caused by the lack of context beyond o...

10/24/2022  Focused Concatenation for Context-Aware Neural Machine Translation
A straightforward approach to context-aware neural machine translation c...

05/10/2023  Context-Aware Document Simplification
To date, most work on text simplification has focused on sentence-level ...

04/08/2020  Pruning and Sparsemax Methods for Hierarchical Attention Networks
This paper introduces and evaluates two novel Hierarchical Attention Net...
