Multi-Task Learning for Argumentation Mining

04/11/2018
by   Claudia Schulz, et al.

We investigate whether and where multi-task learning (MTL) can improve performance on NLP problems related to argumentation mining (AM), in particular argument component identification. Our results show that MTL performs particularly well (and better than single-task learning) when little training data is available for the main task, a common scenario in AM. Our findings challenge previous assumptions that conceptualizations across AM datasets are divergent and that MTL is difficult for semantic or higher-level tasks.
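The multi-task setup the abstract refers to is commonly realized as hard parameter sharing: a shared encoder feeds separate task-specific output heads, and the model is trained on a joint loss. The paper itself does not publish its implementation here, so the following is a minimal, self-contained sketch under illustrative assumptions (toy regression targets standing in for a main AM task and an auxiliary task; all names and data are hypothetical):

```python
import numpy as np

# Hypothetical sketch of hard parameter sharing for multi-task learning:
# a shared encoder feeds two task-specific heads ("main" standing in for
# argument component identification, "aux" for an auxiliary task).
rng = np.random.default_rng(0)

# Toy data: shared 4-dim inputs, one scalar target per task.
X = rng.normal(size=(32, 4))
y_main = X @ np.array([1.0, -1.0, 0.5, 0.0])
y_aux = X @ np.array([0.5, 0.5, -0.5, 1.0])

# Parameters: shared encoder W_s, task-specific heads w_main / w_aux.
W_s = rng.normal(scale=0.1, size=(4, 8))
w_main = rng.normal(scale=0.1, size=8)
w_aux = rng.normal(scale=0.1, size=8)

lr = 0.01
losses = []
for step in range(500):
    h = np.tanh(X @ W_s)                 # shared representation
    e_main = h @ w_main - y_main
    e_aux = h @ w_aux - y_aux
    # Joint loss: both tasks update the shared encoder.
    loss = np.mean(e_main**2) + np.mean(e_aux**2)
    losses.append(loss)
    # Manual backprop through the shared layer.
    n = len(X)
    g_main = h.T @ e_main * (2 / n)
    g_aux = h.T @ e_aux * (2 / n)
    dh = (np.outer(e_main, w_main) + np.outer(e_aux, w_aux)) * (2 / n)
    gW = X.T @ (dh * (1 - h**2))         # tanh derivative
    w_main -= lr * g_main
    w_aux -= lr * g_aux
    W_s -= lr * gW

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the shared encoder receives gradients from both heads, the auxiliary task can regularize the representation, which is one mechanism behind the low-resource gains the abstract reports.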

