Towards Human-Free Automatic Quality Evaluation of German Summarization

05/13/2021
by Neslihan Iskender et al.

Evaluating large summarization corpora with human annotators has proven expensive, both organizationally and financially. Many automatic evaluation metrics have therefore been developed to measure summarization quality in a fast and reproducible way. However, most of these metrics still depend on human effort, as they require gold-standard summaries written by linguistic experts. Since BLANC requires no gold-standard summaries and can, in principle, use any underlying language model, we consider its application to the evaluation of summarization in German. This work demonstrates how to adapt the BLANC metric to a language other than English. We compare BLANC scores with crowd and expert ratings, as well as with commonly used automatic metrics, on a German summarization data set. Our results show that BLANC in German is especially good at evaluating informativeness.
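As a minimal sketch of how BLANC-help might be pointed at a German language model, the snippet below scores a document/summary pair with the open-source blanc package from the original BLANC authors. The model choice bert-base-german-cased and the example texts are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: scoring a German document/summary pair with BLANC-help.
# Assumptions (not taken from the paper): the `blanc` package (pip install blanc)
# accepts a Hugging Face model name via `model_name`, and `bert-base-german-cased`
# serves as the underlying German masked language model.
from blanc import BlancHelp

document = ("Die Stadtverwaltung kündigte an, dass die Bauarbeiten an der "
            "Hauptbrücke im Frühjahr beginnen und etwa zwei Jahre dauern sollen.")
summary = "Die Brückensanierung beginnt im Frühjahr und dauert zwei Jahre."

# BLANC-help measures how much the summary helps a masked language model
# reconstruct masked tokens of the document; no gold-standard summary is needed.
scorer = BlancHelp(model_name="bert-base-german-cased", device="cpu")
score = scorer.eval_once(document, summary)
print(f"BLANC-help score: {score:.3f}")
```

Conceptually, BLANC-help masks tokens in the document and compares the model's reconstruction accuracy with and without the summary prepended as context; a positive score means the summary measurably helps the model, which is what makes reference summaries unnecessary.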

