UoB at SemEval-2021 Task 5: Extending Pre-Trained Language Models to Include Task and Domain-Specific Information for Toxic Span Prediction

10/07/2021
by Erik Yan, et al.

Toxicity is pervasive in social media and poses a major threat to the health of online communities. The recent introduction of pre-trained language models, which have achieved state-of-the-art results in many NLP tasks, has transformed the way we approach natural language processing. However, the inherent nature of pre-training means that these models are unlikely to capture task-specific statistical information or learn domain-specific knowledge. Additionally, most implementations of these models do not employ conditional random fields, a method for simultaneous token classification. We show that incorporating task- and domain-specific information and adding a conditional random field layer improve model performance on the Toxic Spans Detection task at SemEval-2021, achieving a score within 4 percentage points of the top-performing team.
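The token-level tagging setup described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' released implementation: it assumes a HuggingFace `transformers` encoder, the `pytorch-crf` package for the conditional random field layer, and a simple binary toxic/non-toxic tag set; the model name and hyperparameters are placeholders.

```python
# Sketch: a BERT-style encoder with a CRF head for toxic span tagging.
# Assumptions (not from the paper): bert-base-cased encoder, 2 tags,
# the pytorch-crf package (pip install pytorch-crf).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from torchcrf import CRF


class ToxicSpanTagger(nn.Module):
    def __init__(self, model_name="bert-base-cased", num_tags=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)
        # The CRF scores whole tag sequences jointly rather than
        # classifying each token independently.
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        emissions = self.classifier(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(emissions, mask=mask)


tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = ToxicSpanTagger()
batch = tokenizer(["example input text"], return_tensors="pt")
predicted_tags = model(batch["input_ids"], batch["attention_mask"])
```

Decoding the whole sequence jointly is what distinguishes this head from a plain per-token softmax: adjacent tokens inside a toxic span are encouraged to receive consistent labels.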


Related research

09/10/2020
Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation
Pre-trained Language Models (PrLMs) have been widely used as backbones i...

05/05/2023
Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios
In recent years, major advancements in natural language processing (NLP)...

04/14/2021
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce
Existing pre-trained language models (PLMs) have demonstrated the effect...

09/22/2022
Scope of Pre-trained Language Models for Detecting Conflicting Health Information
An increasing number of people now rely on online platforms to meet thei...

10/23/2020
TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification
The experimental landscape in natural language processing for social med...

06/25/2023
Revolutionizing Cyber Threat Detection with Large Language Models
Natural Language Processing (NLP) domain is experiencing a revolution du...

07/08/2022
Getting BART to Ride the Idiomatic Train: Learning to Represent Idiomatic Expressions
Idiomatic expressions (IEs), characterized by their non-compositionality...
