
End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF

by   Xuezhe Ma, et al.
Carnegie Mellon University

State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that automatically benefits from both word- and character-level representations by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, which makes it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- the Penn Treebank WSJ corpus for part-of-speech (POS) tagging and the CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets --- 97.55% accuracy for POS tagging and 91.21% F1 for NER.
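The pipeline the abstract describes — a character-level CNN feeding, together with word embeddings, into a bidirectional LSTM whose outputs are scored by a CRF — can be sketched as below. This is a minimal PyTorch illustration, not the paper's implementation: all layer sizes and names are assumptions, and training (the CRF forward-algorithm loss) is omitted, showing only emission computation and Viterbi decoding.

```python
import torch
import torch.nn as nn

class BLSTMCNNCRF(nn.Module):
    """Illustrative sketch of a BLSTM-CNN-CRF tagger (dimensions are arbitrary)."""

    def __init__(self, n_words, n_chars, n_tags,
                 word_dim=100, char_dim=30, char_filters=30, lstm_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # character-level CNN: convolve over each word's characters, then max-pool
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, lstm_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(lstm_dim, n_tags)
        # CRF transition scores: trans[i, j] = score of moving from tag j to tag i
        self.trans = nn.Parameter(torch.zeros(n_tags, n_tags))

    def emissions(self, words, chars):
        # words: (batch, seq); chars: (batch, seq, max_word_len)
        B, S, W = chars.shape
        c = self.char_emb(chars.view(B * S, W)).transpose(1, 2)  # (B*S, char_dim, W)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values       # (B*S, char_filters)
        x = torch.cat([self.word_emb(words), c.view(B, S, -1)], dim=-1)
        h, _ = self.lstm(x)                                      # (B, S, lstm_dim)
        return self.emit(h)                                      # (B, S, n_tags)

    def viterbi(self, words, chars):
        """CRF Viterbi decoding for a batch of equal-length sentences."""
        e = self.emissions(words, chars)
        B, S, T = e.shape
        score = e[:, 0]                                          # (B, T)
        back = []
        for t in range(1, S):
            # best previous tag for each current tag: prev score + transition
            s = score.unsqueeze(1) + self.trans.unsqueeze(0)     # (B, cur, prev)
            best, idx = s.max(dim=2)
            score = best + e[:, t]
            back.append(idx)
        tags = [score.argmax(dim=1)]
        for idx in reversed(back):
            tags.append(idx.gather(1, tags[-1].unsqueeze(1)).squeeze(1))
        return torch.stack(list(reversed(tags)), dim=1)          # (B, S)
```

The joint CRF layer is what distinguishes this from a plain BiLSTM tagger: the transition matrix lets the model score whole tag sequences, so decoding picks the globally best path rather than the best tag per token.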




Neural sequence labeling for Vietnamese POS Tagging and NER

This paper presents a neural architecture for Vietnamese sequence labeli...

Character-Level Feature Extraction with Densely Connected Networks

Generating character-level features is an important step for achieving g...

Sequence Labeling: A Practical Approach

We take a practical approach to solving sequence labeling problem assumi...

Learning Task-specific Representation for Novel Words in Sequence Labeling

Word representation is a key component in neural-network-based sequence ...

Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks

Selecting optimal parameters for a neural network architecture can often...

A Global Context Mechanism for Sequence Labeling

Sequential labeling tasks necessitate the computation of sentence repres...

IPOD: Corpus of 190,000 Industrial Occupations

Job titles are the most fundamental building blocks for occupational dat...

Code Repositories


Bi-LSTM for POS Tagging
