Identifying beneficial task relations for multi-task learning in deep neural networks

02/27/2017
by Joachim Bingel, et al.

Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.
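
For readers unfamiliar with the setup the abstract refers to, below is a minimal sketch of hard parameter sharing, the standard way MTL is realized in deep neural networks for NLP: a shared encoder is trained jointly with one small head per task. Every concrete choice here (PyTorch, a BiLSTM encoder, the layer sizes, a two-task alternating-batch loop) is an illustrative assumption, not the paper's experimental configuration.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Minimal hard-parameter-sharing model: a shared encoder feeds one
    lightweight classifier ("head") per task. All sizes are illustrative."""

    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128,
                 task_num_labels=(5, 2)):  # hypothetical main + auxiliary task
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared encoder: its weights receive gradients from every task,
        # which is the source of MTL's regularization effect.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One task-specific linear head per task.
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden_dim, n) for n in task_num_labels
        )

    def forward(self, token_ids, task_id):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task_id](states)  # per-token logits for that task


model = HardSharingMTL()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training loop alternating between tasks: each step samples a batch
# from one task and updates the shared encoder plus that task's head.
# Real data loading is omitted; the tensors below are random placeholders.
for step in range(2):
    task_id = step % 2
    tokens = torch.randint(0, 10000, (8, 20))            # fake token batch
    labels = torch.randint(0, (5, 2)[task_id], (8, 20))  # fake tag labels
    logits = model(tokens, task_id)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The question the paper studies is precisely when this kind of joint training helps a given main task: pairing it with some auxiliary tasks improves over the single-task baseline, while other pairings hurt.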


Related research

01/23/2019 - Sentiment and Sarcasm Classification with Multitask Learning
Sentiment classification and sarcasm detection are both important NLP ta...

08/16/2019 - An Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing
Multi-Task Learning (MTL) aims at boosting the overall performance of ea...

07/26/2018 - Concurrent Learning of Semantic Relations
Discovering whether words are semantically related and identifying the s...

03/25/2021 - Copolymer Informatics with Multi-Task Deep Neural Networks
Polymer informatics tools have been recently gaining ground to efficient...

09/16/2017 - Deep Automated Multi-task Learning
Multi-task learning (MTL) has recently contributed to learning better re...

05/23/2017 - Sluice networks: Learning what to share between loosely related tasks
Multi-task learning is partly motivated by the observation that humans b...

12/16/2021 - Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
Retrieval-augmented generation models have shown state-of-the-art perfor...
