Combining Discrete and Neural Features for Sequence Labeling

08/24/2017
by Jie Yang, et al.

Neural network models have recently received considerable research attention in the natural language processing community. Compared with traditional models with discrete features, neural models have two main advantages. First, they take low-dimensional, real-valued embedding vectors as inputs, which can be trained over large amounts of raw data, thereby addressing the issue of feature sparsity in discrete models. Second, deep neural networks can automatically combine input features, including non-local features that capture semantic patterns which cannot be expressed using discrete indicator features. As a result, neural network models have achieved accuracies competitive with the best discrete models for a range of NLP tasks. On the other hand, manual feature templates have been carefully investigated for most NLP tasks over decades and typically cover the most useful indicator patterns for solving the problems. Such information can be complementary to the features automatically induced by neural networks, and therefore combining discrete and neural features can potentially lead to better accuracy compared with models that leverage discrete or neural features only. In this paper, we systematically investigate the effect of combining discrete and neural features for a range of fundamental NLP tasks based on sequence labeling, including word segmentation, POS tagging and named entity recognition for Chinese and English, respectively. Our results on standard benchmarks show that state-of-the-art neural models can give accuracies comparable to the best discrete models in the literature for most tasks, and that combining discrete and neural features consistently yields better results.
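The combination described in the abstract can be illustrated with a minimal sketch: a sparse indicator vector fired by hand-crafted feature templates is concatenated with a dense word embedding before the label-scoring layer. All names, dimensions, and the random weights below are illustrative assumptions, not the authors' implementation (which would learn these parameters and use a full sequence-labeling architecture).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 5-word vocabulary, 3 discrete feature
# templates, 4-dimensional embeddings, 2 output labels.
VOCAB_SIZE, NUM_TEMPLATES, EMB_DIM, NUM_LABELS = 5, 3, 4, 2

# Dense embeddings (pretrained/trainable in a real model) and an
# output layer over the concatenated discrete + neural representation.
embeddings = rng.normal(size=(VOCAB_SIZE, EMB_DIM))
W = rng.normal(size=(EMB_DIM + NUM_TEMPLATES, NUM_LABELS))

def score_token(word_id, fired_templates):
    """Score one token by concatenating its embedding with the sparse
    indicator vector of discrete feature templates that fired."""
    indicator = np.zeros(NUM_TEMPLATES)
    indicator[fired_templates] = 1.0          # one-hot discrete features
    combined = np.concatenate([embeddings[word_id], indicator])
    return combined @ W                       # unnormalized label scores

scores = score_token(word_id=2, fired_templates=[0, 2])
print(scores.shape)  # (2,): one score per label
```

In a full sequence labeler, such per-token scores would feed a CRF or softmax layer; the point here is only that the discrete indicator features and the embedding-derived features enter the scorer jointly rather than in separate models.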


