MIPT-NSU-UTMN at SemEval-2021 Task 5: Ensembling Learning with Pre-trained Language Models for Toxic Spans Detection

04/10/2021
by Mikhail Kotyushev, et al.

This paper describes our system for SemEval-2021 Task 5 on Toxic Spans Detection. We developed ensemble models using BERT-based neural architectures and post-processing to combine tokens into spans. We evaluated several pre-trained language models using various ensemble techniques for toxic span identification and achieved sizable improvements over our baseline fine-tuned BERT models. Finally, our system obtained an F1-score of 67.55%.
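To make the two-stage pipeline concrete, below is a minimal sketch (not the authors' released code) of how token-level toxicity predictions from several models can be majority-voted and then post-processed into character-level spans, the output format SemEval-2021 Task 5 expects. All helper names (Token, ensemble_token_labels, tokens_to_spans, spans_to_offsets), the voting threshold, and the gap-merging rule are illustrative assumptions.

```python
# Illustrative sketch only: an ensemble vote over token labels followed by
# span post-processing, in plain Python. Names and thresholds are
# assumptions, not the authors' actual implementation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Token:
    start: int  # character offset of the token's first character
    end: int    # character offset one past its last character

def ensemble_token_labels(votes: List[List[int]], threshold: float = 0.5) -> List[int]:
    """Majority vote over per-model binary labels (one inner list per model):
    a token is toxic if at least `threshold` of the models say so."""
    n_models = len(votes)
    n_tokens = len(votes[0])
    return [
        1 if sum(model[i] for model in votes) / n_models >= threshold else 0
        for i in range(n_tokens)
    ]

def tokens_to_spans(tokens: List[Token], labels: List[int],
                    max_gap: int = 1) -> List[Tuple[int, int]]:
    """Combine adjacent toxic tokens (separated by at most `max_gap`
    characters, e.g. a single space) into contiguous (start, end) spans."""
    spans: List[List[int]] = []
    for tok, lab in zip(tokens, labels):
        if lab != 1:
            continue
        if spans and tok.start - spans[-1][1] <= max_gap:
            spans[-1][1] = tok.end      # extend the open span
        else:
            spans.append([tok.start, tok.end])
    return [(s, e) for s, e in spans]

def spans_to_offsets(spans: List[Tuple[int, int]]) -> List[int]:
    """Flatten spans into the task's submission format: a sorted list of
    toxic character offsets."""
    return sorted({i for s, e in spans for i in range(s, e)})

if __name__ == "__main__":
    text = "you are a total idiot"
    tokens = [Token(0, 3), Token(4, 7), Token(8, 9), Token(10, 15), Token(16, 21)]
    # Binary predictions from three hypothetical fine-tuned models.
    votes = [[0, 0, 0, 1, 1],
             [0, 0, 0, 0, 1],
             [0, 0, 0, 1, 1]]
    labels = ensemble_token_labels(votes)    # -> [0, 0, 0, 1, 1]
    spans = tokens_to_spans(tokens, labels)  # -> [(10, 21)], i.e. "total idiot"
    print(spans_to_offsets(spans))           # -> [10, 11, ..., 20]
```

In the actual system, the per-token labels would come from fine-tuned transformer checkpoints (BERT variants, per the abstract), and the voting scheme and gap handling are the kinds of post-processing choices such a system has to tune.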

Related research

08/06/2020
aschern at SemEval-2020 Task 11: It Takes Three to Tango: RoBERTa, CRF, and Transfer Learning
We describe our system for SemEval-2020 Task 11 on Detection of Propagan...

04/04/2021
ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction
This paper describes our system for Task 4 of SemEval-2021: Reading Comp...

05/03/2022
Predicting Issue Types with seBERT
Pre-trained transformer models are the current state-of-the-art for natu...

11/13/2022
Xu at SemEval-2022 Task 4: Pre-BERT Neural Network Methods vs Post-BERT RoBERTa Approach for Patronizing and Condescending Language Detection
This paper describes my participation in the SemEval-2022 Task 4: Patron...

04/08/2021
Lone Pine at SemEval-2021 Task 5: Fine-Grained Detection of Hate Speech Using BERToxic
This paper describes our approach to the Toxic Spans Detection problem (...

09/07/2021
FH-SWF SG at GermEval 2021: Using Transformer-Based Language Models to Identify Toxic, Engaging, Fact-Claiming Comments
In this paper we describe the methods we used for our submissions to the...

04/27/2017
Duluth at SemEval-2017 Task 6: Language Models in Humor Detection
This paper describes the Duluth system that participated in SemEval-2017...
