Identifying Visible Actions in Lifestyle Vlogs

06/10/2019
by Oana Ignat, et al.

We consider the task of identifying human actions visible in online videos. We focus on the widespread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to determine whether the actions mentioned in the speech description of a video are visually present. We construct a dataset with crowdsourced manual annotations of visible actions, and introduce a multimodal algorithm that leverages visual and linguistic cues to automatically infer which actions are visible in a video. We show that this multimodal algorithm outperforms baselines that use only one modality at a time.
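The full paper gives the model details; as a rough illustration of the fusion idea described above, the sketch below is a minimal, hypothetical PyTorch classifier that concatenates a pooled visual feature vector for a clip with a text embedding of the mentioned action and predicts whether that action is visible. All names, dimensions, and feature extractors here are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of a multimodal "is this action visible?" classifier.
# NOTE: illustrative assumption only, not the paper's model; the feature
# extractors, dimensions, and names below are hypothetical.
import torch
import torch.nn as nn

class VisibleActionClassifier(nn.Module):
    def __init__(self, visual_dim=2048, text_dim=300, hidden_dim=256):
        super().__init__()
        # Late fusion: concatenate modality features, then score.
        self.fuse = nn.Sequential(
            nn.Linear(visual_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # logit for "action is visible"
        )

    def forward(self, visual_feat, text_feat):
        # visual_feat: (batch, visual_dim) pooled per-clip frame features
        # text_feat:   (batch, text_dim) embedding of the action phrase
        fused = torch.cat([visual_feat, text_feat], dim=-1)
        return self.fuse(fused).squeeze(-1)

# Toy usage with random tensors standing in for real extractors.
model = VisibleActionClassifier()
visual = torch.randn(4, 2048)   # e.g. mean-pooled CNN features per clip
text = torch.randn(4, 300)      # e.g. averaged word vectors for "chop onions"
prob_visible = torch.sigmoid(model(visual, text))
print(prob_visible.shape)  # torch.Size([4])
```

Under this sketch, the unimodal baselines mentioned in the abstract would correspond to dropping one of the two inputs and scoring from the remaining modality alone.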

Related research

09/06/2021
WhyAct: Identifying Action Reasons in Lifestyle Vlogs
We aim to automatically identify human action reasons in online videos. ...

06/27/2021
Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference
This paper introduces a new video-and-language dataset with human action...

03/10/2020
Video Caption Dataset for Describing Human Actions in Japanese
In recent years, automatic video caption generation has attracted consid...

04/19/2022
ActAR: Actor-Driven Pose Embeddings for Video Action Recognition
Human action recognition (HAR) in videos is one of the core tasks of vid...

09/15/2016
Visible Light-Based Human Visual System Conceptual Model
There is a widely held belief in the digital image and video processing ...

02/17/2023
Multimodal Subtask Graph Generation from Instructional Videos
Real-world tasks consist of multiple inter-dependent subtasks (e.g., a d...

04/06/2021
Localizing Visual Sounds the Hard Way
The objective of this work is to localize sound sources that are visible...