Incorporating Count-Based Features into Pre-Trained Models for Improved Stance Detection
The explosive growth and popularity of social media has revolutionised the way we communicate and collaborate. Unfortunately, this same ease of accessing and sharing information has led to an explosion of misinformation and propaganda. Given that stance detection can significantly aid in veracity prediction, this work focuses on boosting automated stance detection, a task on which pre-trained models, as on several others, have been extremely successful. This work shows that stance detection can benefit from feature-based information, especially on certain underperforming classes; however, integrating such features into pre-trained models using ensembling is challenging. We propose a novel architecture for integrating features with pre-trained models that addresses these challenges, and we test our method on the RumourEval 2019 dataset. This method achieves state-of-the-art results with an F1-score of 63.94 on the test set.
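As a rough illustration of the kind of feature integration the abstract describes, the sketch below concatenates hand-crafted count-based features with BERT's pooled sentence representation before a classification head. Everything here is an assumption for illustration, not the paper's actual architecture: the "bert-base-uncased" checkpoint, the linear projection of the count features, the concatenation fusion, and the four-way label set (mirroring RumourEval's support/deny/query/comment scheme) are all choices made for the sketch.

import torch
import torch.nn as nn
from transformers import BertModel

class FeatureAugmentedBert(nn.Module):
    """Hypothetical sketch: fuse count-based features with BERT's
    pooled output before classification. Illustrates the general
    idea of feature integration, not the paper's proposed model."""

    def __init__(self, num_count_features: int, num_labels: int = 4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        # Project raw count features into BERT's hidden space so
        # neither signal dominates the fused representation.
        self.feature_proj = nn.Linear(num_count_features, hidden)
        self.classifier = nn.Linear(hidden * 2, num_labels)

    def forward(self, input_ids, attention_mask, count_features):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        projected = torch.relu(self.feature_proj(count_features))
        # Concatenate contextual and count-based views of the input.
        fused = torch.cat([pooled, projected], dim=-1)
        return self.classifier(fused)

In training, count_features would be a float tensor of per-example counts (for instance hashtags, URLs, or punctuation marks), standardised beforehand; which counts are actually used is not specified in the abstract.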