UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition Extraction

09/11/2020
by   Andrei-Marius Avram, et al.

This work presents our contribution to the 6th task of SemEval-2020: Extracting Definitions from Free Text in Textbooks (DeftEval). The competition consists of three subtasks of increasing granularity: (1) classification of sentences as definitional or non-definitional, (2) sequence labeling of definitional sentences, and (3) relation classification. We use several pretrained language models (i.e., BERT, XLNet, RoBERTa, SciBERT, and ALBERT) to solve each of the three subtasks. For each language model variant, we experiment both with freezing its weights and with fine-tuning them. We also explore a multi-task architecture trained to jointly predict the outputs for the second and third subtasks. Our best-performing model, evaluated on the DeftEval dataset, obtained 32nd place on the first subtask and 37th place on the second. The code is available for further research at: https://github.com/avramandrei/DeftEval.
