Investigating how well contextual features are captured by bi-directional recurrent neural network models

09/03/2017
by Kushal Chawla, et al.

Learning algorithms for natural language processing (NLP) tasks have traditionally relied on manually defined contextual features. Neural network models that use only distributional representations of words, by contrast, have been applied successfully to several NLP tasks; such models learn features automatically and avoid explicit feature engineering. Across many domains, neural models are a natural choice, especially when little is known about the characteristics of the data. This flexibility, however, comes at the cost of interpretability. In this paper, we define three methods to investigate the ability of bi-directional recurrent neural networks (RNNs) to capture contextual features. In particular, we analyze RNNs for sequence tagging tasks, performing a comprehensive analysis on both general and biomedical domain datasets. Our experiments focus on important contextual words as features, and the methods extend readily to other feature types. We also investigate positional effects of context words and show how the developed methods can be used for error analysis.
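To make the setup concrete, the model family under study tags each token using hidden states that summarize both left and right context. The following is a minimal numpy sketch of a bi-directional RNN sequence tagger's forward pass, not the authors' implementation; all parameter names and dimensions here are illustrative.

```python
import numpy as np

def rnn_pass(x, Wx, Wh, b):
    """Run a simple tanh RNN over a sequence; return the hidden state at every step."""
    h = np.zeros(Wh.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh + b)
        states.append(h)
    return np.stack(states)

def birnn_tag_scores(x, params):
    """Concatenate forward and backward states, then project to per-token tag scores."""
    fwd = rnn_pass(x, params["Wx_f"], params["Wh_f"], params["b_f"])
    # Run the same recurrence over the reversed sequence, then re-align to token order.
    bwd = rnn_pass(x[::-1], params["Wx_b"], params["Wh_b"], params["b_b"])[::-1]
    h = np.concatenate([fwd, bwd], axis=1)  # each token now sees left and right context
    return h @ params["Wo"] + params["bo"]  # scores over the tag set, one row per token

# Toy dimensions for illustration only.
rng = np.random.default_rng(0)
d_in, d_h, n_tags, seq_len = 8, 16, 5, 10
params = {
    "Wx_f": rng.normal(scale=0.1, size=(d_in, d_h)),
    "Wh_f": rng.normal(scale=0.1, size=(d_h, d_h)),
    "b_f": np.zeros(d_h),
    "Wx_b": rng.normal(scale=0.1, size=(d_in, d_h)),
    "Wh_b": rng.normal(scale=0.1, size=(d_h, d_h)),
    "b_b": np.zeros(d_h),
    "Wo": rng.normal(scale=0.1, size=(2 * d_h, n_tags)),
    "bo": np.zeros(n_tags),
}
x = rng.normal(size=(seq_len, d_in))  # stand-in for a sequence of word embeddings
scores = birnn_tag_scores(x, params)
print(scores.shape)  # one score vector per token
```

Because each token's representation concatenates states from both directions, probing these scores (e.g., by perturbing or masking a context word at a given offset) is one way such interpretability analyses can examine which context positions the model actually relies on.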


Related research

11/14/2019  Contextual Recurrent Units for Cloze-style Reading Comprehension
  Recurrent Neural Networks (RNN) are known as powerful models for handlin...

04/17/2020  How recurrent networks implement contextual processing in sentiment analysis
  Neural networks have a remarkable capacity for contextual processing, usi...

12/24/2016  Understanding Neural Networks through Representation Erasure
  While neural networks have been successfully applied to many natural lan...

08/24/2017  Combining Discrete and Neural Features for Sequence Labeling
  Neural network models have recently received heated research attention i...

06/21/2022  NorBERT: NetwOrk Representations through BERT for Network Analysis and Management
  Deep neural network models have been very successfully applied to Natura...

07/22/2019  Sparsity Emerges Naturally in Neural Language Models
  Concerns about interpretability, computational resources, and principled...

10/09/2018  Learning Noun Cases Using Sequential Neural Networks
  Morphological declension, which aims to inflect nouns to indicate number...
