Multilingual Coreference Resolution with Harmonized Annotations

07/26/2021
by Ondřej Pražák, et al.

In this paper, we present coreference resolution experiments on the newly created multilingual corpus CorefUD. We focus on the following languages: Czech, Russian, Polish, German, Spanish, and Catalan. In addition to monolingual experiments, we combine the training data in multilingual experiments and train two joint models: one for the Slavic languages and one for all the languages together. We rely on an end-to-end deep learning model that we slightly adapted for the CorefUD corpus. Our results show that we can profit from the harmonized annotations, and that the joint models help significantly for languages with smaller training data.

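The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the span-ranking scoring used in the end-to-end neural coreference family that such models typically build on (e.g., Lee et al., 2017): each candidate span receives a mention score, each ordered pair of spans receives a pairwise antecedent score, and a dummy "no antecedent" option is added. All class names, layer sizes, and dimensions below are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn


class SpanRankingCoref(nn.Module):
    """Minimal span-ranking coreference scorer in the style of
    end-to-end neural coreference models. Layer sizes are illustrative."""

    def __init__(self, hidden_dim: int = 256, ffnn_dim: int = 128):
        super().__init__()
        # Scores how likely a candidate span is to be a mention.
        self.mention_scorer = nn.Sequential(
            nn.Linear(hidden_dim, ffnn_dim), nn.ReLU(), nn.Linear(ffnn_dim, 1)
        )
        # Scores a (span, antecedent) pair from the two span embeddings
        # and their element-wise product.
        self.pair_scorer = nn.Sequential(
            nn.Linear(3 * hidden_dim, ffnn_dim), nn.ReLU(), nn.Linear(ffnn_dim, 1)
        )

    def forward(self, span_emb: torch.Tensor) -> torch.Tensor:
        """span_emb: (num_spans, hidden_dim) embeddings of candidate spans,
        ordered by their position in the document."""
        num_spans, dim = span_emb.shape
        mention = self.mention_scorer(span_emb).squeeze(-1)  # (num_spans,)

        # Pairwise features for every (span i, candidate antecedent j).
        a = span_emb.unsqueeze(1).expand(num_spans, num_spans, dim)
        b = span_emb.unsqueeze(0).expand(num_spans, num_spans, dim)
        pair = self.pair_scorer(torch.cat([a, b, a * b], dim=-1)).squeeze(-1)

        # Total coreference score: s(i, j) = s_m(i) + s_m(j) + s_pair(i, j).
        scores = mention.unsqueeze(1) + mention.unsqueeze(0) + pair

        # Only spans preceding i (j < i) are valid antecedents; column 0 is
        # the dummy "no antecedent" option with a fixed score of 0.
        mask = torch.tril(torch.ones(num_spans, num_spans), diagonal=-1).bool()
        scores = scores.masked_fill(~mask, float("-inf"))
        dummy = span_emb.new_zeros(num_spans, 1)
        return torch.cat([dummy, scores], dim=1)  # (num_spans, num_spans + 1)


# Example: score 5 candidate spans with random embeddings.
if __name__ == "__main__":
    model = SpanRankingCoref()
    spans = torch.randn(5, 256)
    antecedent_scores = model(spans)
    print(antecedent_scores.shape)  # torch.Size([5, 6])
```

In this framing, the joint Slavic and all-languages models described in the abstract would amount to training one such scorer on training documents concatenated across languages, which the harmonized CorefUD annotation scheme makes possible without per-language format handling.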