Language Modelling as a Multi-Task Problem

01/27/2021
by Lucas Weber, et al.

In this paper, we propose to study language modelling as a multi-task problem, bringing together three strands of research: multi-task learning, linguistics, and interpretability. Based on hypotheses derived from linguistic theory, we investigate whether language models adhere to learning principles of multi-task learning during training. To showcase the idea, we analyse the generalisation behaviour of language models as they learn the linguistic concept of Negative Polarity Items (NPIs). Our experiments demonstrate that a multi-task setting naturally emerges within the objective of the more general task of language modelling. We argue that this insight is valuable for multi-task learning, linguistics and interpretability research, and can lead to exciting new findings in all three domains.
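As a rough illustration of the setup described in the abstract, one could treat NPI-containing sentences as their own implicit "subtask" and track the model's loss on them separately from its loss on general sentences across training checkpoints. The sketch below is not the authors' code; the `score_sentence` scorer, the checkpoint list, and the example sentences are hypothetical placeholders.

```python
# Minimal sketch (assumed setup, not the paper's implementation): compare a
# language model's loss on an NPI "subtask" corpus against its loss on a
# general corpus at several training checkpoints.

import math
from typing import Callable, Iterable


def mean_nll(score_sentence: Callable[[str], float],
             sentences: Iterable[str]) -> float:
    """Average negative log-likelihood per sentence under the model."""
    sents = list(sentences)
    return sum(score_sentence(s) for s in sents) / len(sents)


def compare_subtask_curves(checkpoints, npi_sentences, general_sentences):
    """For each checkpoint (step, sentence-scoring function), print the loss
    on the NPI 'subtask' next to the loss on the general corpus."""
    for step, score_sentence in checkpoints:
        npi_loss = mean_nll(score_sentence, npi_sentences)
        gen_loss = mean_nll(score_sentence, general_sentences)
        print(f"step {step:>6}: NPI loss {npi_loss:.3f} | general loss {gen_loss:.3f}")


if __name__ == "__main__":
    # Dummy scorer standing in for a real checkpoint's sentence NLL.
    dummy_checkpoints = [(1000, lambda s: math.log(len(s.split()) + 1))]
    compare_subtask_curves(
        dummy_checkpoints,
        ["Nobody has ever seen it."],      # NPI-licensing context
        ["The cat sat on the mat."],       # general sentence
    )
```

If the NPI curve drops in tandem with, or systematically later than, the general curve, that pattern is the kind of multi-task-style generalisation behaviour the paper sets out to analyse.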


