Visual Story Post-Editing

06/05/2019
by Ting-Yao Hsu, et al.

We introduce the first dataset of human edits to machine-generated visual stories and explore how these collected edits can be used for the visual story post-editing task. The dataset, VIST-Edit, includes 14,905 human-edited versions of 2,981 machine-generated visual stories. The stories were generated by two state-of-the-art visual storytelling models, and each is aligned to five human-edited versions. We establish baselines for the task, showing how a relatively small set of human edits can be leveraged to boost the performance of large visual storytelling models. We also discuss the weak correlation between automatic evaluation scores and human ratings, motivating the need for new automatic metrics.
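To illustrate the evaluation concern raised above, the sketch below shows one common way to quantify how well an automatic metric tracks human judgments: rank correlation between per-story metric scores and human ratings. All numbers and metric choices here are hypothetical placeholders, not the paper's actual data or analysis.

```python
# Minimal sketch: rank correlation between automatic metric scores and
# human ratings for a set of generated stories. The values below are
# hypothetical and only illustrate the kind of analysis described.
from scipy.stats import spearmanr

# Hypothetical per-story scores from an automatic metric
# and the corresponding mean human ratings (e.g., on a 1-5 scale).
metric_scores = [0.21, 0.35, 0.18, 0.42, 0.30, 0.27]
human_ratings = [3.2, 2.8, 3.5, 3.0, 2.9, 3.4]

rho, p_value = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
# A rho near zero (or negative) would reflect the weak correlation
# between automatic scores and human ratings that motivates new metrics.
```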


Related research

Visual Storytelling (04/13/2016)
StoryER: Automatic Story Evaluation via Ranking, Rating and Reasoning (10/16/2022)
SubER: A Metric for Automatic Evaluation of Subtitle Quality (05/11/2022)
On How Users Edit Computer-Generated Visual Stories (02/22/2019)
Towards Data-Driven Automatic Video Editing (07/17/2019)
HIVE: Harnessing Human Feedback for Instructional Visual Editing (03/16/2023)
Translator2Vec: Understanding and Representing Human Post-Editors (07/24/2019)
