Toward Mention Detection Robustness with Recurrent Neural Networks

02/24/2016
by Thien Huu Nguyen, et al.

One of the key challenges in natural language processing (NLP) is to achieve good performance across application domains and languages. In this work, we investigate the robustness of mention detection systems, one of the fundamental tasks in information extraction, via recurrent neural networks (RNNs). The advantage of RNNs over traditional approaches is their capacity to capture long-range context and to implicitly adapt word embeddings, trained on a large corpus, into a task-specific representation, while still preserving the original semantic generalization that helps across domains. Our systematic evaluation of RNN architectures demonstrates that RNNs not only outperform the best reported systems (up to 9% relative error reduction) in the general setting but also achieve state-of-the-art performance in the cross-domain setting for English. For other languages, RNNs are significantly better than traditional methods on the related task of named entity recognition for Dutch (up to 22% relative error reduction).
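The abstract describes RNNs that capture long-range context and fine-tune pretrained word embeddings into a task-specific representation for mention detection, which is typically cast as per-token sequence labeling. The sketch below is a rough illustration only, not the authors' architecture: a bidirectional GRU tagger in PyTorch whose embedding layer is initialized from pretrained vectors and kept trainable; all names, dimensions, and the tag count are hypothetical.

    # Minimal sketch (assumptions: 300-d pretrained embeddings, BIO-style tag set)
    import torch
    import torch.nn as nn

    class RNNMentionTagger(nn.Module):
        def __init__(self, pretrained_embeddings, num_tags, hidden_size=150):
            super().__init__()
            # Initialize from pretrained embeddings and keep them trainable so
            # they adapt into a task-specific representation during training.
            self.embedding = nn.Embedding.from_pretrained(
                pretrained_embeddings, freeze=False)
            # Bidirectional GRU captures left and right context for each token.
            self.rnn = nn.GRU(pretrained_embeddings.size(1), hidden_size,
                              batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_size, num_tags)

        def forward(self, token_ids):
            embedded = self.embedding(token_ids)   # (batch, seq, emb_dim)
            context, _ = self.rnn(embedded)        # (batch, seq, 2 * hidden)
            return self.classifier(context)        # per-token tag scores

    # Hypothetical usage: 10k-word vocabulary, 300-d vectors, 9 mention tags.
    vocab_embeddings = torch.randn(10000, 300)
    model = RNNMentionTagger(vocab_embeddings, num_tags=9)
    scores = model(torch.randint(0, 10000, (2, 12)))   # shape (2, 12, 9)

Keeping the embedding layer unfrozen is what lets the generic vectors shift toward the mention detection task, while the pretrained initialization retains the semantic generalization the abstract credits for cross-domain robustness.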


