An Investigation of Potential Function Designs for Neural CRF

11/11/2020
by Zechuan Hu et al.

The neural linear-chain CRF model is one of the most widely used approaches to sequence labeling. In this paper, we investigate a series of increasingly expressive potential functions for neural CRF models, which not only integrate the emission and transition functions but also explicitly take the representations of the contextual words as input. Our extensive experiments show that a decomposed quadrilinear potential function, based on the vector representations of two neighboring labels and two neighboring words, consistently achieves the best performance.
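To make the idea concrete, here is a minimal sketch of what a decomposed (low-rank) quadrilinear potential over two neighboring labels and two neighboring word representations might look like. All names, dimensions, and the rank-R factorization below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

num_labels, word_dim, rank = 5, 8, 16

# Hypothetical parameters: one projection per factor (two labels, two
# words), all mapping into a shared rank-R space.
U = rng.normal(size=(num_labels, rank))  # embeds the previous label
V = rng.normal(size=(num_labels, rank))  # embeds the current label
P = rng.normal(size=(word_dim, rank))    # projects the previous word vector
Q = rng.normal(size=(word_dim, rank))    # projects the current word vector

def quadrilinear_potential(y_prev, y_cur, w_prev, w_cur):
    """Decomposed quadrilinear potential: the four factors are projected
    into a shared rank-R space, multiplied elementwise, and summed over
    the rank dimension instead of materializing a full 4-way tensor."""
    return float(np.sum(U[y_prev] * V[y_cur] * (w_prev @ P) * (w_cur @ Q)))

# Score one (label, label, word, word) configuration at position i.
w1, w2 = rng.normal(size=word_dim), rng.normal(size=word_dim)
score = quadrilinear_potential(0, 3, w1, w2)
```

The low-rank decomposition keeps the parameter count linear in the rank rather than multiplicative in the four factor dimensions, which is what makes a quadrilinear potential tractable inside a CRF.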


Related research

Locally-Contextual Nonlinear CRFs for Sequence Labeling (03/30/2021)
Linear chain conditional random fields (CRFs) combined with contextual w...

Large-Scale Classification of Structured Objects using a CRF with Deep Class Embedding (05/21/2017)
This paper presents a novel deep learning architecture to classify struc...

Fast and Accurate Sequence Labeling with Approximate Inference Network (09/17/2020)
The linear-chain Conditional Random Field (CRF) model is one of the most...

Hierarchically-Refined Label Attention Network for Sequence Labeling (08/23/2019)
CRF has been used as a powerful model for statistical sequence labeling....

Neural CRF Parsing (07/13/2015)
This paper describes a parsing model that combines the exact dynamic pro...

NCRF++: An Open-source Neural Sequence Labeling Toolkit (06/14/2018)
This paper describes NCRF++, a toolkit for neural sequence labeling. NCR...

Low-Rank Hidden State Embeddings for Viterbi Sequence Labeling (08/02/2017)
In textual information extraction and other sequence labeling tasks it i...
