Learning to Ground Instructional Articles in Videos through Narrations

06/06/2023
by   Effrosyni Mavroudi, et al.

In this paper we present an approach for localizing the steps of procedural activities in narrated how-to videos. To deal with the scarcity of labeled data at scale, we source the step descriptions from a language knowledge base (wikiHow) containing instructional articles for a large variety of procedural tasks. Without any form of manual supervision, our model learns to temporally ground the steps of procedural articles in how-to videos by matching three modalities: frames, narrations, and step descriptions. Specifically, our method aligns steps to video by fusing information from two distinct pathways: i) direct alignment of step descriptions to frames, and ii) indirect alignment obtained by composing steps-to-narrations with narrations-to-video correspondences. Notably, our approach performs global temporal grounding of all steps in an article at once by exploiting order information, and is trained with step pseudo-labels that are iteratively refined and aggressively filtered. To validate our model, we introduce a new evaluation benchmark, HT-Step, obtained by manually annotating a 124-hour subset of HowTo100M with steps sourced from wikiHow articles (a test server is accessible at <https://eval.ai/web/challenges/challenge-page/2082>). Experiments on this benchmark, as well as zero-shot evaluations on CrossTask, demonstrate that our multi-modality alignment yields dramatic gains over several baselines and prior works. Finally, we show that our inner module for matching narrations to video outperforms the state of the art by a large margin on the HTM-Align narration-video alignment benchmark.
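The two-pathway fusion described above can be sketched numerically: a direct steps-to-frames similarity matrix is combined with an indirect one obtained by composing steps-to-narrations with narrations-to-frames correspondences. The matrix names, the row-softmax normalization, and the equal-weight fusion below are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy similarity scores (would come from learned text/video encoders):
# S steps, N narration segments, T video frames.
rng = np.random.default_rng(0)
S, N, T = 4, 6, 10
steps_to_frames = rng.standard_normal((S, T))   # direct pathway scores
steps_to_narrs = rng.standard_normal((S, N))    # step <-> narration scores
narrs_to_frames = rng.standard_normal((N, T))   # narration <-> frame scores

# Indirect pathway: compose step->narration with narration->frame alignments.
# Both factors are row-stochastic, so their product is too.
indirect = softmax(steps_to_narrs, axis=1) @ softmax(narrs_to_frames, axis=1)

# Fuse the two pathways (equal weighting here, purely for illustration).
fused = 0.5 * softmax(steps_to_frames, axis=1) + 0.5 * indirect

# Each row of `fused` is a distribution over frames for one step; the
# grounded temporal location of a step is where its row puts high mass.
assert np.allclose(fused.sum(axis=1), 1.0)
```

Since each pathway yields a row-stochastic steps-over-frames matrix, their convex combination remains a valid per-step distribution over time, which is what makes composing and fusing the alignments well-defined.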


