Human and Machine Action Prediction Independent of Object Information

04/22/2020
by Fatemeh Ziaeetabar, et al.

Predicting other people's actions is key to successful social interaction, enabling us to adjust our own behavior to the consequences of others' future actions. Studies on action recognition have focused on the importance of individual visual features of the objects involved in an action and of its context. Humans, however, recognize actions on unknown objects or even when objects are imagined (pantomime). Other cues must therefore compensate for the lack of recognizable visual object features. Here, we focus on the role of inter-object relations that change during an action. We designed a virtual-reality setup and tested recognition speed for 10 different manipulation actions on 50 subjects. All objects were abstracted by emulated cubes, so the actions could not be inferred from object information. Instead, subjects had to rely only on the information carried by the changes in the spatial relations between those cubes. Despite these constraints, our results show that subjects were able to predict actions in, on average, less than 64% of the action's duration. Additionally, we employed a computational model, an enriched Semantic Event Chain (eSEC), incorporating the information of spatial relations, specifically (a) objects' touching/untouching, (b) static spatial relations between objects, and (c) dynamic spatial relations between objects. Trained on the same actions as those observed by the subjects, the model successfully predicted actions even better than humans. Information-theoretical analysis shows that eSECs use individual cues optimally, whereas humans presumably rely mostly on a mixed-cue strategy, which takes longer until recognition. Providing a better cognitive basis of action recognition may, on the one hand, improve our understanding of related human pathologies and, on the other hand, help to build robots for conflict-free human-robot cooperation. Our results open new avenues here.
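To illustrate the idea behind such event-chain representations, the sketch below encodes an action as an ordered sequence of events, each a tuple of (touching/untouching, static relation, dynamic relation) for a pair of abstracted objects, and predicts by matching an observed prefix against trained templates. This is only a minimal toy sketch, not the authors' eSEC implementation; the action names, relation labels, and the `predict` function are hypothetical examples.

```python
# Toy sketch of prefix-based action prediction over relation chains.
# Each event is a hypothetical tuple: (touch, static_relation, dynamic_relation).
TEMPLATES = {
    "put_on_top": [
        ("untouching", "above", "getting_closer"),
        ("touching", "on", "stable"),
    ],
    "push_together": [
        ("untouching", "beside", "getting_closer"),
        ("touching", "beside", "moving_together"),
    ],
}

def predict(observed):
    """Return the template actions whose event chain starts with the
    observed prefix, i.e. the actions still consistent with what was seen."""
    return [
        name for name, chain in TEMPLATES.items()
        if chain[:len(observed)] == observed
    ]

# After a single observed event, the action may already be unambiguous,
# even though no object identity was ever used:
print(predict([("untouching", "above", "getting_closer")]))
```

The key point the sketch captures is that prediction can succeed before the action completes: as soon as the observed relation prefix matches only one template, the action is decided, with no object-specific features involved.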

