Complex Structure Leads to Overfitting: A Structure Regularization Decoding Method for Natural Language Processing

11/25/2017
by Xu Sun et al.

Recent systems for structured prediction focus on increasing the level of structural dependencies within the model. However, our study suggests that complex structures entail high overfitting risks. To control structure-based overfitting, we propose structure regularization decoding (SR decoding): the decoding of the complex structure model is regularized by an additionally trained simple structure model. We theoretically analyze the quantitative relation between structural complexity and overfitting risk, and the analysis shows that complex structure models are prone to structure-based overfitting. Empirical evaluations show that the proposed method improves the performance of complex structure models by reducing structure-based overfitting. On sequence labeling tasks, the proposed method substantially improves the performance of complex neural network models; the maximum F1 error rate reduction is 36.4% for the third-order model. The proposed method also works for the parsing task, where the maximum UAS improvement is 5.5%, with results competitive with or better than the state-of-the-art.
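The abstract describes SR decoding only at a high level: a simple structure model regularizes the decoding of a complex one. A minimal illustrative sketch of that idea, assuming the combined decoding score is a linear interpolation of the two models' per-position label scores followed by greedy decoding (the function name, the weighting scheme, and the greedy decoder are assumptions for illustration, not the paper's actual formulation):

```python
import numpy as np

def sr_decode(complex_scores, simple_scores, alpha=0.7):
    """Structure-regularized decoding sketch (hypothetical formulation).

    Interpolates the complex model's per-position label scores with a
    simpler model's scores, then decodes greedily per position.
    `alpha` weights the complex model; (1 - alpha) weights the simple
    regularizer. The paper's exact objective may differ.
    """
    combined = (alpha * np.asarray(complex_scores)
                + (1 - alpha) * np.asarray(simple_scores))
    # Greedy per-position decoding over the regularized scores.
    return combined.argmax(axis=1)

# Toy example: 3 positions, 2 labels. The complex model prefers label 1
# at position 1; the simple model's scores pull that decision back.
complex_scores = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]
simple_scores  = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
print(sr_decode(complex_scores, simple_scores, alpha=0.5).tolist())  # → [0, 0, 1]
```

With `alpha=1.0` the sketch reduces to ordinary decoding of the complex model alone; lowering `alpha` shifts decisions toward the simpler, less overfit model.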


