What Makes a Good Paraphrase: Do Automated Evaluations Work?

07/27/2023
by   Anna Moskvina, et al.

Paraphrasing is the task of expressing an essential idea or meaning in different words. But how different should the words be in order to be considered an acceptable paraphrase? And can we exclusively use automated metrics to evaluate the quality of a paraphrase? We attempt to answer these questions by conducting experiments on a German data set and performing automatic and expert linguistic evaluation.
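The abstract does not name the automated metrics that were tested, so the following is only a hypothetical sketch of one such surface-level signal: lexical overlap between a sentence and its candidate paraphrase. The function and the German example pair below are assumptions for illustration, not the paper's method.

# Hypothetical illustration of one automated signal for paraphrase quality:
# surface-level lexical overlap. The metrics actually used in the paper are
# not listed here; this example is illustrative only.
import string

def tokens(text):
    # Lowercase word tokens with surrounding punctuation stripped.
    return {w.strip(string.punctuation) for w in text.lower().split()}

def lexical_overlap(source, paraphrase):
    # Jaccard similarity of token sets: 1.0 = identical wording, 0.0 = disjoint.
    src, par = tokens(source), tokens(paraphrase)
    return len(src & par) / len(src | par) if src | par else 1.0

source = "Die Katze schläft auf dem Sofa."            # hypothetical example
paraphrase = "Auf dem Sofa liegt die Katze und schläft."
print(round(lexical_overlap(source, paraphrase), 2))  # 0.75
# A low overlap only says the wording differs; it cannot say whether the
# meaning is preserved, which is why expert linguistic evaluation matters.

Scores like this capture how different the words are, but not whether the essential idea survives, which is exactly the gap between automatic and expert evaluation that the study examines.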


