Adaptive Fine-Tuning of Transformer-Based Language Models for Named Entity Recognition

02/05/2022
by Felix Stollenwerk, et al.

The current standard approach for fine-tuning transformer-based language models includes a fixed number of training epochs and a linear learning rate schedule. In order to obtain a near-optimal model for the given downstream task, a search in optimization hyperparameter space is usually required. In particular, the number of training epochs needs to be adjusted to the dataset size. In this paper, we introduce adaptive fine-tuning, which is an alternative approach that uses early stopping and a custom learning rate schedule to dynamically adjust the number of training epochs to the dataset size. For the example use case of named entity recognition, we show that our approach not only makes hyperparameter search with respect to the number of training epochs redundant, but also leads to improved results in terms of performance, stability and efficiency. This holds true especially for small datasets, where we outperform the state-of-the-art fine-tuning method by a large margin.
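The core idea described above, replacing a fixed epoch count with early stopping and a custom learning rate schedule, can be sketched in plain Python. This is an illustrative reconstruction, not the paper's published method: the function names (`adaptive_lr`, `fine_tune`), the warmup/decay schedule, and the patience-based stopping rule are assumptions chosen to show the mechanism.

```python
def adaptive_lr(step, base_lr=5e-5, warmup_steps=100, decay=0.999):
    """Illustrative custom schedule: linear warmup, then exponential decay.

    Unlike the standard linear schedule, it does not need to know the total
    number of training steps in advance, so training can end at any point.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * decay ** (step - warmup_steps)


def fine_tune(eval_metric_per_epoch, patience=2, max_epochs=50):
    """Train until the validation metric stops improving (early stopping).

    eval_metric_per_epoch: callable epoch -> validation score (e.g. NER F1).
    Returns (number of epochs actually run, best validation score), so the
    effective number of epochs adapts to the dataset rather than being fixed.
    """
    best, best_epoch, epochs_run = float("-inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        score = eval_metric_per_epoch(epoch)
        epochs_run = epoch
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs -> stop early
    return epochs_run, best


# Toy validation curve: improves for 5 epochs, then plateaus.
epochs, best = fine_tune(lambda e: min(e, 5) * 0.1, patience=2)
```

On the toy curve, training stops shortly after the plateau begins instead of running for a preset epoch budget, which is the behavior that makes a per-dataset epoch search unnecessary.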


09/16/2023

Rethinking Learning Rate Tuning in the Era of Large Language Models

Large Language Models (LLMs) represent the recent success of deep learni...
11/13/2020

FLERT: Document-Level Features for Named Entity Recognition

Current state-of-the-art approaches for named entity recognition (NER) u...
11/24/2021

Few-shot Named Entity Recognition with Cloze Questions

Despite the huge and continuous advances in computational linguistics, t...
07/22/2021

Target-Oriented Fine-tuning for Zero-Resource Named Entity Recognition

Zero-resource named entity recognition (NER) severely suffers from data ...
03/11/2022

Staged Training for Transformer Language Models

The current standard approach to scaling transformer language models tra...
05/28/2021

Weighted Training for Cross-Task Learning

In this paper, we introduce Target-Aware Weighted Training (TAWT), a wei...
07/05/2023

Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning

In this paper, we introduce the Instruction Following Score (IFS), a met...
