cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora

06/04/2021
by   Abhilash Nandy, et al.

This paper describes the performance of the team cs60075_team2 at SemEval 2021 Task 1 - Lexical Complexity Prediction. The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora: some general (e.g., Wikipedia, BooksCorpus), some drawn from the corpora from which the CompLex Dataset was extracted, and others from specific domains such as Finance and Law. We perform ablation studies on the choice of transformer models and on how their individual complexity scores are aggregated into the final complexity score. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multi-word expressions).
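The abstract mentions aggregating per-model complexity scores and evaluating with Pearson correlation. As a minimal sketch of that pipeline (the model names, prediction values, and the simple mean-ensemble aggregation are illustrative assumptions, not the paper's actual method):

```python
import math

def aggregate_scores(model_scores):
    """Combine per-model complexity predictions with a simple mean ensemble."""
    return [sum(scores) / len(scores) for scores in zip(*model_scores)]

def pearson(x, y):
    """Pearson correlation coefficient between predicted and gold scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical predictions from two fine-tuned transformers on four target words
bert_preds = [0.20, 0.35, 0.50, 0.10]
roberta_preds = [0.25, 0.30, 0.55, 0.15]
gold = [0.22, 0.33, 0.52, 0.12]

ensemble = aggregate_scores([bert_preds, roberta_preds])
print(round(pearson(ensemble, gold), 3))
```

In practice the paper's ablations compare different model subsets and aggregation schemes; the mean ensemble above is only the simplest baseline for combining scores.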
