
ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection

08/13/2020
by Juan Manuel Pérez, et al.

This paper describes our participation in SemEval-2020 Task 12: Multilingual Offensive Language Detection. We jointly trained a single model by fine-tuning Multilingual BERT to tackle the task across all of the proposed languages: English, Danish, Turkish, Greek, and Arabic. Despite sharing the same parameters across all languages, our single model achieved competitive results, with performance close to that of the top-performing systems. We also conducted zero-shot and few-shot experiments to analyze transfer performance among these languages. We make our code public for further research.
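The following is a minimal sketch of the joint-training setup the abstract describes: a single Multilingual BERT model fine-tuned for binary offensive-language classification on data pooled across languages, so that one set of parameters is shared by English, Danish, Turkish, Greek, and Arabic. It is not the authors' released code; the HuggingFace Transformers API is assumed, and the dataset format, hyperparameters, and example texts are hypothetical placeholders.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class PooledOffensiveDataset(Dataset):
    """Examples from all languages mixed into one training set (hypothetical (text, label) format)."""
    def __init__(self, examples, tokenizer, max_len=128):
        self.examples = examples          # list of (text, label) pairs, label in {0, 1}
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        text, label = self.examples[idx]
        enc = self.tokenizer(text, truncation=True, padding="max_length",
                             max_length=self.max_len, return_tensors="pt")
        return {"input_ids": enc["input_ids"].squeeze(0),
                "attention_mask": enc["attention_mask"].squeeze(0),
                "labels": torch.tensor(label)}

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

# Toy pooled examples standing in for the English/Danish/Turkish/Greek/Arabic training data.
train_examples = [("example offensive tweet", 1), ("example benign tweet", 0)]
loader = DataLoader(PooledOffensiveDataset(train_examples, tokenizer),
                    batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        out = model(**batch)              # one shared set of parameters for all languages
        out.loss.backward()
        optimizer.step()
```

Zero-shot transfer in this setup amounts to training on the pooled data with one language held out and then evaluating the same shared model on that held-out language; few-shot transfer adds a small number of its examples to the pool.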

Related research

06/02/2020 · BERT Based Multilingual Machine Comprehension in English and Hindi
Multilingual Machine Comprehension (MMC) is a Question-Answering (QA) su...

11/19/2019 · Towards Lingua Franca Named Entity Recognition with BERT
Information extraction is an important task in NLP, enabling the automat...

04/03/2019 · 75 Languages, 1 Model: Parsing Universal Dependencies Universally
We present UDify, a multilingual multi-task model capable of accurately ...

06/07/2022 · OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection
We propose a multilingual adversarial training model for determining whe...

11/15/2022 · Multilingual and Multimodal Topic Modelling with Pretrained Embeddings
This paper presents M3L-Contrast – a novel multimodal multilingual (M3L)...

09/23/2018 · Mind Your Language: Abuse and Offense Detection for Code-Switched Languages
In multilingual societies like the Indian subcontinent, use of code-swit...