Iterative Recursive Attention Model for Interpretable Sequence Classification

08/30/2018
by Martin Tutek, et al.

Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models are of limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model that constructs incremental representations of the input by reusing the results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while obtaining performance close to the state of the art.
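To make the mechanism concrete, the sketch below shows one way such an iterative attention loop could look in PyTorch. This is an illustration, not the authors' code: the class and parameter names (IterativeAttention, query_update, n_steps), the additive scoring function, the GRU-cell query update, and the fixed number of steps are all assumptions made here for concreteness. Only the core idea, reusing the result of each query to form the next one while recording a per-step attention map, comes from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeAttention(nn.Module):
    """Illustrative sketch: runs a fixed number of attention steps over
    encoded tokens, feeding each step's summary back into the next query."""

    def __init__(self, hidden_dim: int, n_steps: int = 3):
        super().__init__()
        self.n_steps = n_steps
        # Folds the current attention summary into the running query state
        # (GRU cell chosen here as one plausible update; an assumption).
        self.query_update = nn.GRUCell(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, states: torch.Tensor):
        # states: (batch, seq_len, hidden_dim), e.g. outputs of an RNN encoder.
        batch, _, dim = states.shape
        query = states.new_zeros(batch, dim)  # initial (empty) query
        attn_maps = []                        # kept for later inspection
        for _ in range(self.n_steps):
            # Additive-style compatibility of every position with the current query.
            scores = self.score(torch.tanh(states + query.unsqueeze(1))).squeeze(-1)
            weights = F.softmax(scores, dim=-1)               # (batch, seq_len)
            summary = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
            # Reuse the result of this query when forming the next one.
            query = self.query_update(summary, query)
            attn_maps.append(weights)
        # Final representation plus one attention map per step: (batch, n_steps, seq_len).
        return query, torch.stack(attn_maps, dim=1)

# Toy usage: 4 sequences of 20 tokens with 128-dim encoder states.
rep, maps = IterativeAttention(128, n_steps=3)(torch.randn(4, 20, 128))
print(rep.shape, maps.shape)  # torch.Size([4, 128]) torch.Size([4, 3, 20])
```

Plotting maps[:, t] over the input tokens yields one attention heatmap per step, which is the kind of per-step trace the abstract describes as easily interpretable.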

Related research

Neural Tree Indexers for Text Understanding (07/15/2016)
Recurrent neural networks (RNNs) process input text sequentially and mod...

Attention Boosted Sequential Inference Model (12/05/2018)
Attention mechanism has been proven effective on natural language proces...

Syntax-based Attention Model for Natural Language Inference (07/22/2016)
Introducing attentional mechanism in neural network is a powerful concep...

Universal Transformer Hawkes Process with Adaptive Recursive Iteration (12/29/2021)
Asynchronous events sequences are widely distributed in the natural worl...

QNet: A Quantum-native Sequence Encoder Architecture (10/31/2022)
This work investigates how current quantum computers can improve the per...

On the Interpretability of Attention Networks (12/30/2022)
Attention mechanisms form a core component of several successful deep le...

Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models (06/01/2021)
Due to their black-box and data-hungry nature, deep learning techniques ...
