EGO-TOPO: Environment Affordances from Egocentric Video

01/14/2020
by   Tushar Nagarajan, et al.

First-person video naturally brings the use of a physical environment to the forefront, since it shows the camera wearer interacting fluidly in a space based on their intentions. However, current methods largely separate the observed actions from the persistent space itself. We introduce a model for environment affordances that is learned directly from egocentric video. The main idea is to gain a human-centric model of a physical space (such as a kitchen) that captures (1) the primary spatial zones of interaction and (2) the likely activities they support. Our approach decomposes a space into a topological map derived from first-person activity, organizing an ego-video into a series of visits to the different zones. Further, we show how to link zones across multiple related environments (e.g., from videos of multiple kitchens) to obtain a consolidated representation of environment functionality. On EPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene affordances and anticipating future actions in long-form video.
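The core idea of a topological map built from visits can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual pipeline: it assumes zone and action labels have already been extracted from the video, and simply accumulates per-zone actions (affordances) and zone-to-zone transitions.

```python
from collections import defaultdict

def build_topological_map(visits):
    """Build a zone graph from an ordered list of (zone, action) visits.

    Nodes are zones; edges count observed transitions between zones;
    each zone accumulates the actions seen there (its affordances).
    Zone/action labels here are hypothetical, not from the paper.
    """
    zone_actions = defaultdict(set)   # zone -> set of actions observed there
    transitions = defaultdict(int)    # (zone_a, zone_b) -> transition count
    prev_zone = None
    for zone, action in visits:
        zone_actions[zone].add(action)
        if prev_zone is not None and prev_zone != zone:
            transitions[(prev_zone, zone)] += 1
        prev_zone = zone
    return dict(zone_actions), dict(transitions)

# Toy visit sequence from one kitchen video (hypothetical labels)
visits = [
    ("sink", "wash"), ("counter", "chop"), ("stove", "fry"),
    ("sink", "rinse"), ("counter", "mix"),
]
zones, edges = build_topological_map(visits)
```

Consolidating across environments then amounts to merging zones from different videos' graphs when they support similar activities, yielding a shared map of functional zones.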


