UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition Extraction

09/11/2020
by Andrei-Marius Avram, et al.

This work presents our contribution to the 6th task of SemEval-2020: Extracting Definitions from Free Text in Textbooks (DeftEval). The competition consists of three subtasks with different levels of granularity: (1) classification of sentences as definitional or non-definitional, (2) labeling of definitional sentences, and (3) relation classification. We use several pretrained language models (i.e., BERT, XLNet, RoBERTa, SciBERT, and ALBERT) to solve each of the three subtasks. Specifically, for each language model variant, we experiment with both freezing its weights and fine-tuning them. We also explore a multi-task architecture trained to jointly predict the outputs for the second and third subtasks. Our best-performing model, evaluated on the DeftEval dataset, obtains 32nd place on the first subtask and 37th place on the second subtask. The code is available for further research at: https://github.com/avramandrei/DeftEval.
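The multi-task setup above can be sketched in miniature. This is a hedged illustration, not the authors' implementation: a shared encoder (a stand-in for a pretrained language model) feeds two task-specific heads, one for sequence labeling (subtask 2) and one for relation classification (subtask 3), and a joint objective sums the two subtask losses so a single optimization step updates the shared encoder and both heads. All function names and the toy featurization are assumptions for illustration.

```python
# Toy sketch of a two-head multi-task architecture (assumed setup,
# not the authors' exact code).

def shared_encoder(tokens):
    # Stand-in for a pretrained LM: map each token to a scalar feature.
    return [len(t) / 10.0 for t in tokens]

def tagging_head(features):
    # Subtask 2 head: per-token label prediction.
    return ["TERM" if f > 0.4 else "O" for f in features]

def relation_head(features):
    # Subtask 3 head: a single sentence-level relation score.
    return sum(features) / len(features)

def joint_loss(loss_tagging, loss_relation, weight=1.0):
    # Multi-task objective: weighted sum of the two subtask losses,
    # so one backward pass would update the shared encoder and both heads.
    return loss_tagging + weight * loss_relation
```

In the real system, the shared encoder would be a transformer whose weights are either frozen or fine-tuned, and the two heads would be trained jointly on the summed loss.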

