EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks

10/16/2021
by Frederick Liu, et al.

Encoder-decoder transformer architectures have become popular recently with the advent of T5 models. Given its generality, the encoder-decoder architecture is also more favorable than architectures like BERT for pre-training on the language modeling task at large scale, where models can take months to train. While it generalizes to more tasks, it is not evident that the encoder-decoder architecture is the most efficient choice for fine-tuning a pre-trained model on classification and regression tasks. In this work, we study fine-tuning pre-trained encoder-decoder models such as T5. In particular, we propose EncT5 as a way to efficiently fine-tune pre-trained encoder-decoder T5 models for classification and regression tasks by using only the encoder layers. Our experimental results show that EncT5, with less than half the parameters of T5, performs similarly to T5 models on the GLUE benchmark. We believe our proposed approach can be easily applied to any pre-trained encoder-decoder model.
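The core idea is straightforward to prototype. Below is a minimal sketch, assuming the Hugging Face transformers library: it loads only the encoder stack of a pre-trained T5 checkpoint via T5EncoderModel (dropping the decoder, roughly half of the parameters) and attaches a small classification head. The EncT5Classifier class name, the masked mean pooling, and the linear head are illustrative assumptions, not necessarily the exact head design used in the paper.

```python
# Minimal sketch of the EncT5 idea: encoder-only fine-tuning of T5.
# Assumes the Hugging Face `transformers` library; pooling and head
# are illustrative choices, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import T5EncoderModel, T5Tokenizer


class EncT5Classifier(nn.Module):  # hypothetical class name
    def __init__(self, model_name: str = "t5-base", num_labels: int = 2):
        super().__init__()
        # Load only the encoder stack of a pre-trained T5 model.
        self.encoder = T5EncoderModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, d_model)
        # Masked mean pooling over non-padding tokens (an assumption).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(pooled)  # (batch, num_labels) logits


tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = EncT5Classifier("t5-base", num_labels=2)
batch = tokenizer(["a great movie", "a dull movie"],
                  padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

From here, fine-tuning proceeds as with any classifier: a cross-entropy loss on the logits for classification, or a regression loss on a one-dimensional head.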


Related research

05/21/2019
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer
Language model (LM) pre-training has resulted in impressive performance ...

05/09/2023
An Exploration of Encoder-Decoder Approaches to Multi-Label Classification for Legal and Biomedical Text
Standard methods for multi-label text classification largely rely on enc...

04/17/2023
Typos-aware Bottlenecked Pre-Training for Robust Dense Retrieval
Current dense retrievers (DRs) are limited in their ability to effective...

09/05/2023
nanoT5: A PyTorch Framework for Pre-training and Fine-tuning T5-style Models with Limited Resources
State-of-the-art language models like T5 have revolutionized the NLP lan...

04/07/2022
Parameter-Efficient Abstractive Question Answering over Tables or Text
A long-term ambition of information seeking QA systems is to reason over...

12/01/2020
Pre-Trained Image Processing Transformer
As the computing power of modern hardware is increasing strongly, pre-tr...

10/10/2022
Leveraging Key Information Modeling to Improve Less-Data Constrained News Headline Generation via Duality Fine-Tuning
Recent language generative models are mostly trained on large-scale data...
