Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression

03/03/2017
by Jangwon Lee, et al.

We design a new approach that allows a robot to learn new activities from unlabeled human example videos. Given videos of humans executing the same activity from the human's viewpoint (i.e., first-person videos), our objective is to make the robot learn the temporal structure of the activity as its future regression network, and learn to transfer such a model to its own motor execution. We present a new deep learning model: we extend a state-of-the-art convolutional object detection network to represent and estimate human hands in training videos, and newly introduce the concept of using a fully convolutional network to regress (i.e., predict) the intermediate scene representation corresponding to a future frame (e.g., 1-2 seconds later). Combining these allows direct prediction of future locations of human hands and objects, which enables the robot to infer a motor control plan using our manipulation network. We experimentally confirm that our approach makes learning of robot activities from unlabeled human interaction videos possible, and demonstrate that our robot is able to execute the learned collaborative activities in real-time directly from its camera input.
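To make the "future regression" idea concrete, the sketch below shows a minimal fully convolutional regressor in plain NumPy: it maps the current intermediate scene representation (a C x H x W feature map) to a predicted representation of the same shape for a future frame. This is an illustrative assumption about the setup, not the authors' implementation; the layer sizes, weights, and the naive `conv2d` helper are all hypothetical.

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded 2-D convolution.
    x: (C_in, H, W) feature map; w: (C_out, C_in, k, k) kernels; b: (C_out,) biases."""
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i+k, j:j+k] * w[o]) + b[o]
    return out

def future_regressor(feat, weights):
    """Fully convolutional future regression: predicts the intermediate scene
    representation ~1-2 s ahead, keeping the spatial layout of the input."""
    x = feat
    for w, b in weights[:-1]:
        x = np.maximum(conv2d(x, w, b), 0.0)  # ReLU on hidden layers
    w, b = weights[-1]
    return conv2d(x, w, b)                    # linear output layer

# Toy example: an 8-channel 16x16 feature map and two random conv layers.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
weights = [
    (rng.standard_normal((16, 8, 3, 3)) * 0.1, np.zeros(16)),
    (rng.standard_normal((8, 16, 3, 3)) * 0.1, np.zeros(8)),
]
future_feat = future_regressor(feat, weights)
print(future_feat.shape)  # same shape as the input representation
```

Because input and output share the same spatial resolution, a hand/object detection head can be applied unchanged to the regressed future representation, which is what makes direct prediction of future hand and object locations possible in this scheme.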


Related research

05/20/2017 · Forecasting Hands and Objects in Future Frames
This paper presents an approach to forecast future presence and location...

06/20/2014 · Early Recognition of Human Activities from First-Person Videos Using Onset Representations
In this paper, we propose a methodology for early recognition of human a...

02/11/2019 · Peeking into the Future: Predicting Future Person Activities and Locations in Videos
Deciphering human behaviors to predict their future paths/trajectories a...

05/26/2016 · Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters
In this paper, we newly introduce the concept of temporal attention filt...

11/25/2019 · Forecasting Human Object Interaction: Joint Prediction of Motor Attention and Egocentric Activity
We address the challenging task of anticipating human-object interaction...

12/05/2017 · Learning to Forecast Videos of Human Activity with Multi-granularity Models and Adaptive Rendering
We propose an approach for forecasting video of complex human activity i...

11/25/2019 · Robot Learning and Execution of Collaborative Manipulation Plans from YouTube Videos
People often watch videos on the web to learn how to cook new recipes, a...
