Fine-tuned Language Models for Text Classification

01/18/2018
by Jeremy Howard, et al.

Transfer learning has revolutionized computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Fine-tuned Language Models (FitLaM), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a state-of-the-art language model. Our method significantly outperforms the state-of-the-art on five text classification tasks, reducing the error by 18-24% on the majority of datasets. We open-source our pretrained models and code to enable adoption by the community.
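The recipe the abstract describes is: take a language model pretrained on a large general corpus, attach a small task-specific classifier head, and fine-tune the whole stack on the target dataset. The sketch below is a minimal illustration of that recipe, not the authors' released code; the LSTM architecture, the dimensions, and the specific learning rates are assumptions chosen for illustration. The per-layer learning rates echo the discriminative fine-tuning idea from this line of work, where earlier (more general) layers are updated more gently than the new head.

```python
import torch
import torch.nn as nn

class LMClassifier(nn.Module):
    """A pretrained language-model body topped with a new classifier head.

    Hypothetical sizes; a real setup would load pretrained weights into
    `embed` and `lstm` before fine-tuning.
    """
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=512, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=3, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)  # new, task-specific layer

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))  # (batch, seq, hidden_dim)
        return self.head(out[:, -1])            # classify from the final hidden state

model = LMClassifier()
# Pretrained LM weights would be copied into model.embed / model.lstm here.

# Per-layer learning rates: smaller for the general-purpose lower layers,
# larger for the freshly initialized head. Values are illustrative only.
optimizer = torch.optim.Adam([
    {"params": model.embed.parameters(), "lr": 1e-4},
    {"params": model.lstm.parameters(),  "lr": 5e-4},
    {"params": model.head.parameters(),  "lr": 1e-3},
])
```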

Related research

01/01/2021 · WARP: Word-level Adversarial ReProgramming
Transfer learning from pretrained language models recently became the do...

02/27/2019 · An Embarrassingly Simple Approach for Transfer Learning from Pretrained Language Models
A growing number of state-of-the-art transfer learning methods employ la...

04/16/2021 · Language Models are Few-Shot Butlers
Pretrained language models demonstrate strong performance in most NLP ta...

06/12/2022 · DeepEmotex: Classifying Emotion in Text Messages using Deep Transfer Learning
Transfer learning has been widely used in natural language processing th...

09/15/2023 · Large Language Models for Failure Mode Classification: An Investigation
In this paper we present the first investigation into the effectiveness ...

04/18/2021 · Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
When primed with only a handful of training samples, very large pretrain...

02/05/2023 · FineDeb: A Debiasing Framework for Language Models
As language models are increasingly included in human-facing machine lea...
