Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction

09/19/2022
by Michiel van der Meer, et al.

This paper describes our contributions to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses Large Language Models for the task of Argument Quality Prediction. We perform prompt engineering with GPT-3, and we also investigate three training paradigms: multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms any single model. Prompting GPT-3 works best for predicting argument validity, while argument novelty is best estimated by a model trained using all three training paradigms.
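To make the prompting setup concrete, below is a minimal sketch of how GPT-3 could be queried for argument validity, assuming the 2022-era OpenAI Completion API. The prompt template, the input fields (topic, premise, conclusion), and the predict_validity helper are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: prompting GPT-3 for argument validity.
# Assumes the legacy (2022-era) OpenAI Completion API; the prompt
# wording below is an illustration, not the paper's exact template.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT_TEMPLATE = (
    "Topic: {topic}\n"
    "Premise: {premise}\n"
    "Conclusion: {conclusion}\n"
    "Does the conclusion validly follow from the premise? Answer yes or no:"
)


def predict_validity(topic: str, premise: str, conclusion: str) -> bool:
    """Return True if GPT-3 judges the conclusion valid given the premise."""
    prompt = PROMPT_TEMPLATE.format(
        topic=topic, premise=premise, conclusion=conclusion
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3 model available in 2022
        prompt=prompt,
        max_tokens=1,              # only "yes" / "no" is needed
        temperature=0,             # deterministic judgement
    )
    answer = response["choices"][0]["text"].strip().lower()
    return answer.startswith("yes")


# Example usage on a made-up instance:
# predict_validity(
#     "school uniforms",
#     "Uniforms reduce visible socioeconomic differences between students.",
#     "Schools should require uniforms.",
# )
```

Setting temperature to 0 keeps the yes/no judgement repeatable, which matters when such predictions are mixed with the outputs of trained models.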

Related research

- Multi-task Learning for Japanese Predicate Argument Structure Analysis (04/03/2019): An event-noun is a noun that has an argument structure similar to a pred...
- Overview of the 2021 Key Point Analysis Shared Task (10/20/2021): We describe the 2021 Key Point Analysis (KPA-2021) shared task on key po...
- Multi-Task Learning for Argumentation Mining in Low-Resource Settings (04/11/2018): We investigate whether and where multi-task learning (MTL) can improve p...
- Key Point Analysis via Contrastive Learning and Extractive Argument Summarization (09/30/2021): Key point analysis is the task of extracting a set of concise and high-l...
- Computational Models of Tutor Feedback in Language Acquisition (07/07/2017): This paper investigates the role of tutor feedback in language learning ...
- A Two-Phase Approach Towards Identifying Argument Structure in Natural Language (12/16/2016): We propose a new approach for extracting argument structure from natural...
- Multi-Task Attentive Residual Networks for Argument Mining (02/24/2021): We explore the use of residual networks and neural attention for argumen...
