Towards Simple and Efficient Task-Adaptive Pre-training for Text Classification

09/26/2022
by Arnav Ladkat, et al.

Language models are pre-trained on large generic corpora such as BookCorpus, Common Crawl, and Wikipedia, which is essential for the model to learn the linguistic characteristics of the language. Recent studies propose Domain-Adaptive Pre-training (DAPT) and Task-Adaptive Pre-training (TAPT) as an intermediate step before the final fine-tuning task. This step helps the model cover the target-domain vocabulary and improves its performance on the downstream task. In this work, we study the impact of training only the embedding layer on the model's performance during TAPT and task-specific fine-tuning. Based on our study, we propose a simple approach that makes the intermediate TAPT step for BERT-based models more efficient by selectively pre-training only some BERT layers. We show that training only the BERT embedding layer during TAPT is sufficient to adapt to the vocabulary of the target domain while achieving comparable performance, and that the approach is computationally efficient, training 78% fewer parameters during TAPT. The proposed embedding-layer fine-tuning can also serve as an efficient domain-adaptation technique.
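The reported 78% reduction is consistent with the parameter breakdown of BERT-base, where roughly 24M of the ~110M parameters sit in the embedding layer. A back-of-the-envelope sanity check in plain Python, assuming the standard bert-base-uncased configuration (vocabulary 30,522; hidden size 768; 12 encoder layers; FFN size 3,072; 512 positions):

```python
# Rough parameter count for the assumed bert-base-uncased configuration.
V, H, L, F, P = 30522, 768, 12, 3072, 512  # vocab, hidden, layers, FFN, positions

# Embedding layer: word + position + token-type embeddings, plus LayerNorm (weight + bias).
embeddings = V * H + P * H + 2 * H + 2 * H

# One encoder layer: Q/K/V/output projections (weights + biases),
# two FFN matrices (weights + biases), and two LayerNorms.
attention = 4 * (H * H + H)
ffn = (H * F + F) + (F * H + H)
layer_norms = 2 * (2 * H)
encoder = L * (attention + ffn + layer_norms)

pooler = H * H + H
total = embeddings + encoder + pooler

# Fraction of the model left frozen when only the embedding layer is trained.
frozen_fraction = 1 - embeddings / total
print(f"total parameters:     {total:,}")
print(f"embedding parameters: {embeddings:,}")
print(f"frozen during embedding-only TAPT: {frozen_fraction:.0%}")
```

This is only a rough check under the assumed configuration; the exact figure depends on which heads are attached during masked-language-model pre-training, but the frozen fraction lands at about 78%, matching the abstract's claim.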


Related research:

Task-adaptive Pre-training of Language Models with Word Embedding Regularization (09/17/2021)
On the importance of pre-training data volume for compact language models (10/08/2020)
Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords (07/14/2023)
Neural Mask Generator: Learning to Generate Adaptive Word Maskings for Language Model Adaptation (10/06/2020)
Resource-efficient domain adaptive pre-training for medical images (04/28/2022)
Post Training in Deep Learning with Last Kernel (11/14/2016)
FDAPT: Federated Domain-adaptive Pre-training for Language Models (07/12/2023)
