Token-level Fitting Issues of Seq2seq Models

05/08/2023
by Guangsheng Bao et al.

Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks. We find that seq2seq models trained with early stopping suffer from issues at the token level. In particular, while some tokens in the vocabulary demonstrate overfitting, others underfit when training is stopped. Experiments show that these phenomena are pervasive across different models, even in fine-tuned large pretrained models. We identify three major factors that influence token-level fitting: token frequency, part of speech, and prediction discrepancy. Further, we find that external factors such as language, model size, domain, data scale, and pretraining can also influence the fitting of tokens.
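To make the notion of token-level fitting concrete, here is a minimal sketch (not the paper's actual method) of one way to probe it: accumulate per-token cross-entropy separately on the training and validation sets at the early-stopping checkpoint, then compare the two per vocabulary entry. Tokens with low training loss but much higher validation loss look overfit; tokens with high loss on both splits look underfit. The `model(src, tgt_in)` interface, batch format, and padding handling below are illustrative assumptions.

```python
# A minimal sketch, assuming a PyTorch seq2seq model whose forward pass
# returns logits of shape (batch, seq_len, vocab_size). Padding handling
# is omitted for brevity; real code should mask pad positions.
import torch
import torch.nn.functional as F

def per_token_loss(model, batches, vocab_size, device="cpu"):
    """Return mean cross-entropy per vocabulary token over `batches`.

    `batches` is assumed to yield (src, tgt_in, tgt_out) CPU tensors,
    where tgt_out holds the gold next-token ids.
    """
    loss_sum = torch.zeros(vocab_size)
    count = torch.zeros(vocab_size)
    model.eval()
    with torch.no_grad():
        for src, tgt_in, tgt_out in batches:
            logits = model(src.to(device), tgt_in.to(device))
            # Per-position loss with no reduction, so each value can be
            # bucketed under the token id it was computed for.
            loss = F.cross_entropy(
                logits.view(-1, vocab_size),
                tgt_out.view(-1).to(device),
                reduction="none",
            ).cpu()
            ids = tgt_out.view(-1)
            loss_sum.index_add_(0, ids, loss)
            count.index_add_(0, ids, torch.ones_like(loss))
    return loss_sum / count.clamp(min=1)

# Usage sketch: a large positive train/validation gap flags a token as
# overfit; high loss on both splits flags it as underfit.
# train_loss = per_token_loss(model, train_batches, V)
# valid_loss = per_token_loss(model, valid_batches, V)
# gap = valid_loss - train_loss
```

Grouping the resulting per-token statistics by frequency bucket or part-of-speech tag would then expose the kinds of factor-level trends the abstract describes.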


