Massive Choice, Ample Tasks (MaChAmp): A Toolkit for Multi-task Learning in NLP

05/29/2020
by Rob van der Goot, et al.

Transfer learning, particularly approaches that combine multi-task learning with pre-trained contextualized embeddings and fine-tuning, has advanced the field of Natural Language Processing tremendously in recent years. In this paper we present MaChAmp, a toolkit for easily fine-tuning BERT-like models in multi-task settings. The benefits of MaChAmp are its flexible configuration options and its support for a variety of NLP tasks in a uniform toolkit, from text classification to sequence labeling and dependency parsing.
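To make the "flexible configuration" concrete, the sketch below shows roughly how a multi-task run (POS tagging plus dependency parsing on CoNLL-U data) could be declared with MaChAmp's JSON-based dataset configuration. It is an illustrative sketch only: the dataset name, file paths, and output file are hypothetical, and the exact keys (train_data_path, word_idx, task_type, column_idx) and the train.py entry point follow the toolkit's documented format but may differ between versions.

```python
# Illustrative sketch: build a MaChAmp-style dataset configuration for joint
# POS tagging and dependency parsing and write it to disk. Key names are based
# on the MaChAmp documentation; paths and dataset name are made up.
import json
import os

dataset_config = {
    "UD_EWT": {
        # CoNLL-U formatted train/dev splits (hypothetical file names).
        "train_data_path": "data/en_ewt-ud-train.conllu",
        "validation_data_path": "data/en_ewt-ud-dev.conllu",
        # Column holding the word forms (0-based index into the CoNLL-U columns).
        "word_idx": 1,
        # One entry per task; all tasks are trained jointly on a shared encoder.
        "tasks": {
            "upos": {"task_type": "seq", "column_idx": 3},
            "dependency": {"task_type": "dependency", "column_idx": 6},
        },
    }
}

os.makedirs("configs", exist_ok=True)
with open("configs/ewt_multitask.json", "w", encoding="utf-8") as f:
    json.dump(dataset_config, f, indent=4)

# Training would then be launched along these lines (flag names are
# version-dependent; see the MaChAmp README):
#   python3 train.py --dataset_config configs/ewt_multitask.json
```

Because tasks are declared in the configuration rather than in code, adding or removing a task amounts to editing this file, which is the kind of flexibility the abstract highlights.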

Related research:

- FineText: Text Classification via Attention-based Language Model Fine-tuning (10/25/2019)
  Training deep neural networks from scratch on natural language processin...
- When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning (05/17/2022)
  Transfer learning (TL) in natural language processing (NLP) has seen a s...
- Ranger: A Toolkit for Effect-Size Based Multi-Task Evaluation (05/24/2023)
  In this paper, we introduce Ranger - a toolkit to facilitate the easy us...
- Device Tuning for Multi-Task Large Model (02/21/2023)
  Unsupervised pre-training approaches have achieved great success in many...
- Multi-task learning for natural language processing in the 2020s: where are we going? (07/22/2020)
  Multi-task learning (MTL) significantly pre-dates the deep learning era,...
- Weighted Training for Cross-Task Learning (05/28/2021)
  In this paper, we introduce Target-Aware Weighted Training (TAWT), a wei...
- ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models (04/19/2022)
  Learning visual representations from natural language supervision has re...
