PALI-NLP at SemEval-2022 Task 4: Discriminative Fine-tuning of Deep Transformers for Patronizing and Condescending Language Detection

03/09/2022
by Dou Hu, et al.

Patronizing and condescending language (PCL) can cause substantial harm yet is difficult to detect, both for human judges and for existing NLP systems. For SemEval-2022 Task 4, we propose a novel Transformer-based model and its ensembles to accurately capture the context of such language for PCL detection. To handle the subtle and subjective nature of PCL, two fine-tuning strategies are applied to capture discriminative features from diverse linguistic behaviour and the categorical distribution. The system achieves strong results on the official ranking, placing 1st in Subtask 1 and 5th in Subtask 2. Extensive experiments on the task demonstrate the effectiveness of our system and its strategies.
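The abstract does not spell out the fine-tuning strategies, but a common reading of "discriminative fine-tuning" (as popularized by ULMFiT) is layer-wise learning rate decay, where lower Transformer layers are updated more conservatively than upper ones. The sketch below illustrates that idea for a binary PCL classifier using Hugging Face Transformers; the RoBERTa backbone, learning rates, and decay factor are illustrative assumptions, not the authors' reported configuration.

import torch
from transformers import AutoModelForSequenceClassification

# Assumed backbone; Subtask 1 is binary PCL detection, hence num_labels=2.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

base_lr = 2e-5   # learning rate for the top encoder layer (assumed value)
decay = 0.95     # multiplicative decay per layer toward the embeddings (assumed)
num_layers = model.config.num_hidden_layers

param_groups = []
# Embeddings get the most-decayed rate: low-level features should change least.
param_groups.append({
    "params": model.roberta.embeddings.parameters(),
    "lr": base_lr * decay ** num_layers,
})
# Each encoder layer gets a progressively larger rate toward the top.
for i, layer in enumerate(model.roberta.encoder.layer):
    param_groups.append({
        "params": layer.parameters(),
        "lr": base_lr * decay ** (num_layers - 1 - i),
    })
# The task-specific classification head trains at the full base rate.
param_groups.append({"params": model.classifier.parameters(), "lr": base_lr})

optimizer = torch.optim.AdamW(param_groups, weight_decay=0.01)

The decayed schedule lets the classification head and top layers adapt quickly to the PCL task while preserving the general-purpose linguistic features encoded in the lower layers.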


Related research

12/21/2022
SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning
Pre-trained large language models can efficiently interpolate human-writ...

05/23/2022
Improving language models fine-tuning with representation consistency targets
Fine-tuning contextualized representations learned by pre-trained langua...

10/24/2018
Universal Language Model Fine-Tuning with Subword Tokenization for Polish
Universal Language Model for Fine-tuning [arXiv:1801.06146] (ULMFiT) is ...

10/13/2022
Predicting Fine-Tuning Performance with Probing
Large NLP models have recently shown impressive performance in language ...

09/14/2021
On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning
Recent work has shown evidence that the knowledge acquired by multilingu...

09/11/2021
Empirical Analysis of Training Strategies of Transformer-based Japanese Chit-chat Systems
In recent years, several high-performance conversational systems have be...

04/26/2021
Morph Call: Probing Morphosyntactic Content of Multilingual Transformers
The outstanding performance of transformer-based language models on a gr...
