Transfer Learning of Lexical Semantic Families for Argumentative Discourse Units Identification

09/06/2022
by João Rodrigues et al.

Argument mining tasks require handling a range of linguistic phenomena, from low to high complexity, as well as commonsense knowledge. Previous work has shown that pre-trained language models, built on different pre-training objectives and applied with transfer learning techniques, are highly effective at encoding syntactic and semantic phenomena. It remains an open question, however, to what extent existing pre-trained language models capture the complexity of argument mining tasks. We report experiments that shed light on how language models specialized on different lexical semantic families affect performance on the task of identifying argumentative discourse units. The results show that transfer learning techniques benefit the task, but that current methods may be insufficient to leverage commonsense knowledge from the different lexical semantic families.

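To make the setup concrete, below is a minimal sketch of the kind of transfer-learning pipeline the abstract describes: fine-tuning a pre-trained language model as a classifier for argumentative discourse units. The checkpoint name (bert-base-uncased), the binary label set, and the example sentence are illustrative assumptions using the Hugging Face transformers library, not details taken from the paper.

```python
# Minimal sketch: classify a candidate discourse unit as argumentative or not
# with a pre-trained encoder. Model name and label set are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # any BERT-family checkpoint would do

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=2,  # 0 = non-argumentative, 1 = argumentative discourse unit
)
model.eval()

# Illustrative input; in practice the model is first fine-tuned on
# ADU-annotated data before making predictions like this one.
text = "Renewable energy should be subsidised because it cuts emissions."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

label = logits.argmax(dim=-1).item()
print("argumentative" if label == 1 else "non-argumentative")
```

Since the paper compares language models obtained from different lexical semantic families, the same sketch would be repeated with each such checkpoint swapped in for MODEL_NAME and the resulting task performance compared.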
