Gradual Fine-Tuning for Low-Resource Domain Adaptation

03/03/2021
by Haoran Xu, et al.

Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain. Such domain adaptation is typically done using one stage of fine-tuning. We demonstrate that gradually fine-tuning in a multi-stage process can yield substantial further gains and can be applied without modifying the model or learning objective.
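The full method is in the paper itself; as a rough illustration of the idea in the abstract, below is a minimal sketch of multi-stage fine-tuning with a gradually narrowing data schedule. It assumes PyTorch, a toy linear model, and synthetic stand-ins for the plentiful out-of-domain pool and the scarce in-domain pool; the stage count, the linear mixing schedule, and all names here are illustrative assumptions, not taken from the paper.

```python
# Sketch of gradual (multi-stage) fine-tuning: each stage trains on a mix
# of out-of-domain and in-domain data, with the out-of-domain share shrinking
# stage by stage. Hypothetical schedule and toy model, not the paper's setup.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, ConcatDataset, Subset

torch.manual_seed(0)

# Synthetic stand-ins: plentiful out-of-domain data, scarce in-domain data.
ood_x, ood_y = torch.randn(2000, 16), torch.randn(2000, 1)
ind_x, ind_y = torch.randn(200, 16), torch.randn(200, 1)
ood_data = TensorDataset(ood_x, ood_y)
ind_data = TensorDataset(ind_x, ind_y)

model = nn.Linear(16, 1)  # placeholder for the pretrained NLP model
loss_fn = nn.MSELoss()    # placeholder objective; it never changes across stages

NUM_STAGES = 4
for stage in range(NUM_STAGES):
    # Shrink the out-of-domain portion linearly at each stage so the training
    # distribution drifts toward the target domain; the last stage is
    # in-domain data only.
    keep = int(len(ood_data) * (1 - stage / (NUM_STAGES - 1)))
    stage_data = ConcatDataset([Subset(ood_data, range(keep)), ind_data])
    loader = DataLoader(stage_data, batch_size=32, shuffle=True)

    # Each stage resumes from the previous stage's weights; only the data
    # schedule changes, not the model or the learning objective.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"stage {stage}: {len(stage_data)} examples, last batch loss {loss.item():.4f}")
```

The design point the abstract emphasizes is visible in the sketch: between stages, nothing about the model or objective is modified, so the approach drops into an existing fine-tuning pipeline by changing only what data each stage sees.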

