Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5

10/31/2022
by Irina Bigoulaeva, et al.

We compare sequential fine-tuning with a model for multi-task learning in a setting where we are interested in boosting performance on two tasks, one of which depends on the other. We test these models on the FigLang2022 shared task, which requires participants to predict language inference labels on figurative language along with corresponding textual explanations of the inference predictions. Our results show that while sequential multi-task learning can be tuned to be good at the first of two target tasks, it performs less well on the second and additionally struggles with overfitting. Our findings show that simple sequential fine-tuning of text-to-text models is an extraordinarily powerful method for cross-task knowledge transfer while simultaneously predicting multiple interdependent targets. So much so that our best model achieved the (tied) highest score on the task.
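The approach described in the abstract lends itself to a compact implementation. Below is a minimal sketch, assuming Hugging Face transformers and PyTorch: the "explain nli" prompt template, the auxiliary dataset, the example records, and all hyperparameters are illustrative assumptions rather than the authors' exact setup. The point it demonstrates is the sequential recipe itself: a single T5 model is first fine-tuned on an auxiliary explanation task and then fine-tuned again, with the same weights, on the figurative-language NLI task, emitting the inference label and its explanation as one target string.

```python
# Minimal sketch of two-stage sequential fine-tuning for joint NLI label +
# explanation generation with T5 (Hugging Face transformers + PyTorch).
# The prompt template, field names, datasets, and hyperparameters are
# illustrative assumptions, not the authors' exact configuration.
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5TokenizerFast

MODEL_NAME = "t5-base"  # assumption: any T5 checkpoint follows the same recipe
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)


def fine_tune(examples, epochs=3, lr=1e-4, batch_size=8):
    """One fine-tuning stage; called once per task in the sequential setup."""
    loader = DataLoader(examples, batch_size=batch_size, shuffle=True,
                        collate_fn=lambda batch: batch)  # keep raw dicts
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            # Cast each example into text-to-text form: the target string
            # carries both the inference label and its explanation.
            sources = [f"explain nli premise: {ex['premise']} "
                       f"hypothesis: {ex['hypothesis']}" for ex in batch]
            targets = [f"{ex['label']} explanation: {ex['explanation']}"
                       for ex in batch]
            enc = tokenizer(sources, padding=True, truncation=True,
                            max_length=256, return_tensors="pt")
            labels = tokenizer(targets, padding=True, truncation=True,
                               max_length=128, return_tensors="pt").input_ids
            labels[labels == tokenizer.pad_token_id] = -100  # mask padding in loss
            loss = model(input_ids=enc.input_ids,
                         attention_mask=enc.attention_mask,
                         labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()


# Stage 1: fine-tune on an auxiliary explanation dataset (tiny placeholder here).
auxiliary_data = [
    {"premise": "A man is playing a guitar.",
     "hypothesis": "A person is making music.",
     "label": "entailment",
     "explanation": "Playing a guitar is a way of making music."},
]
# Stage 2: continue fine-tuning the SAME weights on the figurative-language task.
figurative_data = [
    {"premise": "Her words cut deeper than a knife.",
     "hypothesis": "Her words were hurtful.",
     "label": "entailment",
     "explanation": "Words that cut deep is a metaphor for causing emotional pain."},
]
fine_tune(auxiliary_data)
fine_tune(figurative_data)

# A single generate() call then yields the label and its explanation together.
model.eval()
query = ("explain nli premise: Time is a thief. "
         "hypothesis: Time passes and takes things away.")
inputs = tokenizer(query, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_length=64)[0],
                       skip_special_tokens=True))
```

Decoding the label and explanation as one sequence lets the explanation condition on the predicted label, which is one plausible way to handle the two interdependent targets the abstract describes.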


