Next-Active-Object prediction from Egocentric Videos

04/10/2019
by Antonino Furnari, et al.

Although First Person Vision systems can sense the environment from the user's perspective, they are generally unable to predict the user's intentions and goals. Since human activities can be decomposed into atomic actions and object interactions, intelligent wearable systems would benefit from the ability to anticipate user-object interactions. Although this task is not trivial, the First Person Vision paradigm provides important cues for addressing it. We propose to exploit the dynamics of the scene to recognize next-active-objects before an object interaction begins. We train a classifier to discriminate trajectories leading to an object activation from all others, and forecast next-active-objects by analyzing fixed-length trajectory segments within a temporal sliding window. The proposed method compares favorably against several baselines on the Activities of Daily Living (ADL) egocentric dataset, which comprises 10 hours of video acquired by 20 subjects performing unconstrained interactions with several objects.
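
The sliding-window trajectory classification described in the abstract can be pictured roughly as follows. This is a minimal sketch, assuming fixed-length object trajectory segments, a simple flattened featurization, and a linear SVM as the binary classifier; the names WINDOW_LEN, extract_trajectory_features, train_activation_classifier, and predict_next_active are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of sliding-window next-active-object scoring (illustrative only;
# window length, features, and classifier choice are assumptions).
import numpy as np
from sklearn.svm import LinearSVC

WINDOW_LEN = 15  # assumed number of frames per fixed-length trajectory segment


def extract_trajectory_features(segment):
    """Flatten a fixed-length object trajectory segment (e.g. per-frame
    x, y coordinates) into a single feature vector (placeholder featurization)."""
    return np.asarray(segment, dtype=float).ravel()


def train_activation_classifier(positive_segments, negative_segments):
    """Train a binary classifier separating trajectory segments that lead
    to an object activation (positives) from all other segments (negatives)."""
    X = np.stack([extract_trajectory_features(s)
                  for s in list(positive_segments) + list(negative_segments)])
    y = np.array([1] * len(positive_segments) + [0] * len(negative_segments))
    clf = LinearSVC()
    clf.fit(X, y)
    return clf


def predict_next_active(clf, trajectory):
    """Slide a fixed-length window over an object's trajectory and score each
    segment; high scores flag the object as a candidate next-active-object."""
    scores = []
    for t in range(len(trajectory) - WINDOW_LEN + 1):
        feat = extract_trajectory_features(trajectory[t:t + WINDOW_LEN])
        scores.append(float(clf.decision_function(feat[None, :])[0]))
    return scores
```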


Related research:

- 10/12/2020 · The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain
  Wearable cameras allow to collect images and videos of humans interactin...
- 11/14/2022 · Discovering A Variety of Objects in Spatio-Temporal Human-Object Interactions
  Spatio-temporal Human-Object Interaction (ST-HOI) detection aims at dete...
- 09/12/2022 · Graphing the Future: Activity and Next Active Object Prediction using Graph-based Activity Representations
  We present a novel approach for the visual prediction of human-object in...
- 12/16/2021 · Human Hands as Probes for Interactive Object Understanding
  Interactive object understanding, or what we can do to objects and how i...
- 09/19/2022 · MECCANO: A Multimodal Egocentric Dataset for Humans Behavior Understanding in the Industrial-like Domain
  Wearable cameras allow to acquire images and videos from the user's pers...
- 11/23/2022 · Learning to Imitate Object Interactions from Internet Videos
  We study the problem of imitating object interactions from Internet vide...
- 06/03/2019 · How Much Does Audio Matter to Recognize Egocentric Object Interactions?
  Sounds are an important source of information on our daily interactions ...
