Visually plausible human-object interaction capture from wearable sensors

05/05/2022
by   Vladimir Guzov, et al.

In everyday life, humans naturally modify their surrounding environment through interactions, e.g., moving a chair to sit on it. To reproduce such interactions in virtual spaces (e.g., the metaverse), we need to be able to capture and model them, including changes in scene geometry, ideally from egocentric input alone (a head camera and body-worn inertial sensors). This is an extremely hard problem, especially since the object or scene might not be visible from the head camera (e.g., a person not looking at a chair while sitting down, or not looking at the door handle while opening a door). In this paper, we present HOPS, the first method to capture interactions such as dragging objects and opening doors from egocentric data alone. Central to our method is reasoning about human-object interactions, which allows us to track objects even when they are not visible from the head camera. HOPS localizes and registers both the human and the dynamic object in a pre-scanned static scene. HOPS is an important first step towards advanced AR/VR applications based on immersive virtual universes, and can provide human-centric training data to teach machines to interact with their surroundings. The supplementary video, data, and code will be available on our project page at http://virtualhumans.mpi-inf.mpg.de/hops/


