Paraphrase Detection on Noisy Subtitles in Six Languages

09/21/2018
by Eetu Sjöblom, et al.

We perform automatic paraphrase detection on subtitle data from the Opusparcus corpus, which covers six European languages: German, English, Finnish, French, Russian, and Swedish. We train two types of supervised sentence embedding models: a word-averaging (WA) model and a gated recurrent averaging network (GRAN) model. We find that GRAN outperforms WA and is more robust to noisy training data. Better results are obtained with larger amounts of noisier training data than with smaller amounts of cleaner data. Additionally, we experiment with other datasets, without reaching the same level of performance, due to domain mismatch between training and test data.
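To make the two model types concrete, below is a minimal sketch (not the authors' code) of how a word-averaging encoder and a GRAN-style encoder can be implemented, assuming PyTorch; the gating scheme follows the general GRAN idea of gating each word embedding with a GRU hidden state before averaging, and all names, dimensions, and the cosine-similarity scoring at the end are illustrative assumptions.

```python
# Minimal sketch of the two sentence-embedding models compared in the paper.
# Assumes PyTorch; vocab_size, dim, and token ids are illustrative.
import torch
import torch.nn as nn

class WordAveraging(nn.Module):
    """WA model: a sentence embedding is the mean of its word embeddings."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)

    def forward(self, tokens, lengths):
        # tokens: (batch, seq_len) word ids, 0 = padding (embeds to zero vectors)
        vecs = self.emb(tokens)                           # (batch, seq_len, dim)
        return vecs.sum(dim=1) / lengths.unsqueeze(1).float()  # mean over true length

class GRAN(nn.Module):
    """GRAN: gate each word embedding with a GRU hidden state, then average."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, tokens, lengths):
        vecs = self.emb(tokens)                           # (batch, seq_len, dim)
        hidden, _ = self.gru(vecs)                        # (batch, seq_len, dim)
        gate = torch.sigmoid(self.gate(torch.cat([vecs, hidden], dim=-1)))
        gated = vecs * gate                               # zero at padding positions
        return gated.sum(dim=1) / lengths.unsqueeze(1).float()

# Hypothetical usage: score a sentence pair by cosine similarity of embeddings.
model = GRAN(vocab_size=50000, dim=300)
s1, len1 = torch.tensor([[5, 12, 7, 0]]), torch.tensor([3])
s2, len2 = torch.tensor([[5, 31, 7, 9]]), torch.tensor([4])
score = torch.cosine_similarity(model(s1, len1), model(s2, len2))
```

In this sketch the paraphrase decision would come from thresholding the similarity score; how the models are actually trained and evaluated on Opusparcus is described in the full paper.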
