
ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection

by Juan Manuel Pérez, et al.

This paper describes our participation in SemEval-2020 Task 12: Multilingual Offensive Language Detection. We jointly trained a single model by fine-tuning Multilingual BERT to tackle the task across all the proposed languages: English, Danish, Turkish, Greek, and Arabic. Our single model achieved competitive results, with performance close to that of the top-performing systems despite sharing the same parameters across all languages. We also conducted zero-shot and few-shot experiments to analyze transfer performance among these languages. We make our code public for further research.
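The joint-training setup described in the abstract (one model with shared parameters, fed examples from all five languages) and the zero-shot experiments can be illustrated with a data-preparation sketch. The datasets, texts, and helper functions below are hypothetical stand-ins for the OffensEval 2020 data, and the resulting stream would feed a Multilingual BERT fine-tuning loop rather than the placeholder shown:

```python
import random

# Hypothetical per-language (text, label) pairs standing in for the
# OffensEval 2020 training sets; label 1 = offensive, 0 = not offensive.
datasets = {
    "en": [("example offensive text", 1), ("have a nice day", 0)],
    "da": [("hav en god dag", 0)],
    "tr": [("iyi günler", 0)],
    "el": [("καλημέρα", 0)],
    "ar": [("صباح الخير", 0)],
}

def build_joint_training_set(datasets, seed=0):
    """Merge every language into one shuffled stream so a single model
    with shared parameters sees all languages during fine-tuning."""
    joint = [(lang, text, label)
             for lang, pairs in datasets.items()
             for text, label in pairs]
    random.Random(seed).shuffle(joint)
    return joint

def zero_shot_split(datasets, held_out):
    """Train on all languages except `held_out` and evaluate on `held_out`,
    mirroring the zero-shot transfer experiments."""
    train = build_joint_training_set(
        {lang: pairs for lang, pairs in datasets.items() if lang != held_out})
    test = [(held_out, text, label) for text, label in datasets[held_out]]
    return train, test

joint = build_joint_training_set(datasets)
```

The key design point this sketch captures is that no per-language heads or parameters are introduced: every example, regardless of language, updates the same model.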


BERT Based Multilingual Machine Comprehension in English and Hindi

Multilingual Machine Comprehension (MMC) is a Question-Answering (QA) su...

Towards Lingua Franca Named Entity Recognition with BERT

Information extraction is an important task in NLP, enabling the automat...

75 Languages, 1 Model: Parsing Universal Dependencies Universally

We present UDify, a multilingual multi-task model capable of accurately ...

OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection

We propose a multilingual adversarial training model for determining whe...

Multilingual and Multimodal Topic Modelling with Pretrained Embeddings

This paper presents M3L-Contrast – a novel multimodal multilingual (M3L)...

Mind Your Language: Abuse and Offense Detection for Code-Switched Languages

In multilingual societies like the Indian subcontinent, use of code-swit...