A Variational Graph Autoencoder for Manipulation Action Recognition and Prediction

10/25/2021
by Gamze Akyol, et al.

Despite decades of research, understanding human manipulation activities remains one of the most attractive and challenging topics in computer vision and robotics. Recognition and prediction of observed human manipulation actions underpin applications such as human-robot interaction and robot learning from demonstration. The current research trend relies heavily on advanced convolutional neural networks that process structured Euclidean data, such as RGB camera images. These networks, however, incur immense computational cost to process high-dimensional raw data. In contrast to related work, we introduce a deep graph autoencoder that jointly learns to recognize and predict manipulation tasks from symbolic scene graphs rather than structured Euclidean data. Our network is a variational autoencoder with two branches: one identifies the type of the input graph, and the other predicts future graphs. The input to the proposed network is a set of semantic graphs that store the spatial relations between subjects and objects in the scene; the output is a label set representing the detected and predicted class types. We benchmark our model against several state-of-the-art methods on two datasets, MANIAC and MSRC-9, and show that it achieves better performance. We also release our source code at https://github.com/gamzeakyol/GNet.
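To illustrate the kind of two-branch variational graph architecture the abstract describes, here is a minimal sketch in PyTorch. All layer sizes, module names, and the mean-pool readout are illustrative assumptions, not the authors' implementation; the actual model is released at https://github.com/gamzeakyol/GNet.

```python
# Hypothetical sketch of a two-branch variational graph autoencoder for
# manipulation action recognition and prediction. Layer sizes and names
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConv(nn.Module):
    """Simple graph convolution: H' = relu(A_hat @ H @ W), A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return F.relu(self.lin(adj @ x))


class TwoBranchVGAE(nn.Module):
    def __init__(self, node_dim, hidden_dim, latent_dim, num_classes):
        super().__init__()
        self.enc = GraphConv(node_dim, hidden_dim)
        self.enc_mu = GraphConv(hidden_dim, latent_dim)
        self.enc_logvar = GraphConv(hidden_dim, latent_dim)
        # Branch 1: recognize the action class of the observed scene graph.
        self.recognize = nn.Linear(latent_dim, num_classes)
        # Branch 2: predict the action class associated with the future graph.
        self.predict = nn.Linear(latent_dim, num_classes)

    def forward(self, x, adj):
        h = self.enc(x, adj)
        mu, logvar = self.enc_mu(h, adj), self.enc_logvar(h, adj)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        g = z.mean(dim=0)  # mean-pool node embeddings into one graph embedding
        return self.recognize(g), self.predict(g), mu, logvar


# Example: a scene graph with 4 nodes (subject and objects), 8-dim semantic
# node features, and a self-loop-only adjacency for simplicity.
x = torch.randn(4, 8)
adj = torch.eye(4)
model = TwoBranchVGAE(node_dim=8, hidden_dim=16, latent_dim=8, num_classes=5)
rec_logits, pred_logits, mu, logvar = model(x, adj)
```

Training such a model would typically combine cross-entropy losses on both branch outputs with a KL-divergence term on (mu, logvar), in the usual variational-autoencoder fashion; the exact loss weighting used by the authors is not specified in the abstract.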


