Evaluating Subtitle Segmentation for End-to-end Generation Systems

05/19/2022
by   Alina Karakanta, et al.

Subtitles appear on screen as short pieces of text, segmented according to formal constraints (length) and syntactic/semantic criteria. Subtitle segmentation can be evaluated with sequence segmentation metrics against a human reference. However, standard segmentation metrics cannot be applied when systems generate output that differs from the reference, as is the case with end-to-end subtitling systems. In this paper, we study ways to conduct reference-based evaluations of segmentation accuracy irrespective of the textual content. We first conduct a systematic analysis of existing metrics for evaluating subtitle segmentation. We then introduce Sigma, a new Subtitle Segmentation Score derived from an approximate upper bound of BLEU on segmentation boundaries, which allows us to disentangle the effect of good segmentation from text quality. To compare Sigma with existing metrics, we further propose a method for projecting boundaries from imperfect hypotheses onto the true reference. Results show that all metrics are able to reward high-quality output, but for similar outputs the system ranking depends on each metric's sensitivity to error type. Our analyses suggest that Sigma is a promising candidate for segmentation evaluation, but its reliability relative to other segmentation metrics remains to be validated through correlations with human judgements.
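The core idea behind Sigma can be illustrated with a small sketch: treat subtitle breaks as special tokens, compute BLEU on the break-annotated hypothesis against the break-annotated reference, and normalize by BLEU on the same texts with breaks stripped, which acts as the approximate upper bound mentioned above. The exact formulation in the paper may differ; the `sigma` function, the `<eob>` break token, and the simple smoothed sentence-level BLEU below are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    # Multiset of n-grams in the token sequence
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyp, ref, max_n=4):
    """Smoothed sentence-level BLEU with uniform n-gram weights
    and a brevity penalty (simplified, for illustration only)."""
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h = ngram_counts(hyp, n)
        r = ngram_counts(ref, n)
        overlap = sum(min(c, r[g]) for g, c in h.items())
        total = max(sum(h.values()), 1)
        # add-one smoothing to avoid log(0) for higher-order n-grams
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return bp * math.exp(log_prec)

def sigma(hyp_with_breaks, ref_with_breaks, break_token="<eob>"):
    """Sigma-like score: BLEU on break-annotated text, normalized by
    BLEU on the break-stripped text (the approximate upper bound)."""
    hyp = hyp_with_breaks.split()
    ref = ref_with_breaks.split()
    strip = lambda toks: [t for t in toks if t != break_token]
    bleu_br = bleu(hyp, ref)
    bleu_nb = bleu(strip(hyp), strip(ref))
    return bleu_br / bleu_nb if bleu_nb > 0 else 0.0
```

With this normalization, a hypothesis whose text matches the reference but whose breaks are misplaced is penalized only for the break positions, not for the text itself, which is the disentanglement the abstract describes.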

