AILAB-Udine@SMM4H'22: Limits of Transformers and BERT Ensembles

09/07/2022
by Beatrice Portelli, et al.

This paper describes the models developed by the AILAB-Udine team for the SMM4H'22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6, and 10. The main takeaways from participating in these different tasks are the overwhelmingly positive effect of combining different architectures in an ensemble, and the great potential of generative models for term normalization.
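The abstract does not include code, but the first takeaway is easy to illustrate. Below is a minimal sketch of soft voting over heterogeneous transformer classifiers, using the Hugging Face transformers library; the checkpoint names are generic placeholders, not the team's actual models, which would be fine-tuned on the shared-task data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoints: in practice these would be classifiers with
# different architectures, each fine-tuned on the shared-task data.
CHECKPOINTS = ["bert-base-uncased", "roberta-base"]

def ensemble_predict(text: str) -> int:
    """Soft voting: average the softmax outputs of several classifiers."""
    probs = []
    for name in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=2
        )
        model.eval()
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    # Mean probability across models, then pick the most likely class.
    return torch.stack(probs).mean(dim=0).argmax(dim=-1).item()

print(ensemble_predict("this medication gave me a terrible headache"))
```

Averaging probabilities (soft voting) rather than taking a hard majority vote preserves each model's confidence, which generally helps when the ensemble members have different architectures.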

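The second takeaway lends itself to a similar sketch. Framed generatively, term normalization becomes sequence-to-sequence generation: the model reads a free-text mention and emits the normalized term directly, instead of ranking candidates from a dictionary. A minimal example with a T5-style model follows; the checkpoint and the "normalize:" prompt are illustrative assumptions, not the team's actual setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "t5-small" is only a stand-in: a real system would fine-tune a seq2seq
# model on (mention, normalized term) pairs before using it like this.
name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# E.g. map a colloquial symptom mention to a standardized term, such as
# "normalize: my head is killing me" -> "headache".
inputs = tokenizer("normalize: my head is killing me", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```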
