Learning from Auxiliary Sources in Argumentative Revision Classification

09/13/2023
by Tazin Afrin, et al.

We develop models to classify desirable reasoning revisions in argumentative writing. We explore two approaches, multi-task learning and transfer learning, to take advantage of auxiliary sources of revision data for similar tasks. Results of intrinsic and extrinsic evaluations show that both approaches can indeed improve classifier performance over baselines. While multi-task learning shows that training on different sources of data at the same time may improve performance, transfer learning better represents the relationship between the data sources.
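The multi-task idea described in the abstract, training one model on the target revision data and an auxiliary source simultaneously so the shared parameters benefit from both, can be sketched in miniature. The toy data, task names, and the shared-weights/task-specific-bias split below are illustrative assumptions; the abstract does not specify the paper's actual features, corpora, or architecture.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 2-feature toy data standing in for the two sources of
# revision data (main task and auxiliary task); labels are binary,
# e.g. "desirable revision" vs. not.
tasks = {
    "main":      [((1.0, 0.5), 1), ((2.0, 1.0), 1),
                  ((-1.0, -0.5), 0), ((-2.0, -1.0), 0)],
    "auxiliary": [((1.5, 0.2), 1), ((0.8, 1.2), 1),
                  ((-1.5, -0.2), 0), ((-0.8, -1.2), 0)],
}

# Multi-task setup in miniature: weights are shared across tasks,
# while each task keeps its own bias term.
w = [0.0, 0.0]
b = {name: 0.0 for name in tasks}
lr = 0.5

random.seed(0)
for _ in range(200):
    # Alternate SGD updates between tasks ("training on different
    # sources of data at the same time").
    for name, data in tasks.items():
        x, y = random.choice(data)
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b[name])
        err = p - y                  # gradient of log-loss w.r.t. the logit
        w[0] -= lr * err * x[0]      # shared weights see every task's data
        w[1] -= lr * err * x[1]
        b[name] -= lr * err          # bias stays task-specific

def predict(task, x):
    """Binary prediction for a given task and feature vector."""
    return int(sigmoid(w[0] * x[0] + w[1] * x[1] + b[task]) > 0.5)
```

A transfer-learning variant of the same sketch would instead train on the auxiliary data first and then continue training (fine-tune) on the main task alone, which is the second approach the abstract contrasts.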
