Head and eye egocentric gesture recognition for human-robot interaction using eyewear cameras

01/27/2022
by   Javier Marina-Miranda, et al.

Non-verbal communication plays a particularly important role in a wide range of scenarios in Human-Robot Interaction (HRI). Accordingly, this work addresses the problem of human gesture recognition. In particular, we focus on head and eye gestures, and adopt an egocentric (first-person) perspective using eyewear cameras. We argue that this egocentric view offers a number of conceptual and technical benefits over scene- or robot-centric perspectives. A motion-based recognition approach is proposed, which operates at two temporal granularities. Locally, frame-to-frame homographies are estimated with a convolutional neural network (CNN). The output of this CNN is fed to a long short-term memory (LSTM) network to capture the longer-term temporal visual relationships that characterize gestures. Regarding the configuration of the network architecture, one particularly interesting finding is that using the output of an internal layer of the homography CNN increases the recognition rate compared with using the homography matrix itself. While this work focuses on action recognition, and no robot or user study has been conducted yet, the system has been designed to meet real-time constraints. The encouraging results suggest that the proposed egocentric perspective is viable, and this proof-of-concept work provides novel and useful contributions to the exciting area of HRI.
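To make the two-granularity pipeline concrete, below is a minimal sketch in PyTorch. It is not the authors' implementation: the layer sizes, the 8-parameter homography head, the stacking of consecutive grayscale frames on the channel axis, and the names (HomographyCNN, GestureRecognizer, feat_dim) are illustrative assumptions. Only the overall structure follows the abstract: a per-frame-pair CNN whose internal feature vector, rather than the homography itself, feeds an LSTM followed by a gesture classifier.

```python
# Hedged sketch (assumptions, not the authors' exact architecture): a small CNN
# extracts motion features from stacked consecutive grayscale frames, and an
# LSTM aggregates them over time to classify head/eye gestures.
import torch
import torch.nn as nn


class HomographyCNN(nn.Module):
    """Toy stand-in for a homography-regression CNN.

    Input: a pair of consecutive grayscale frames stacked on the channel axis,
    shape (B, 2, H, W). Output: an 8-D homography parameterisation plus the
    internal feature vector that the abstract reports works better as LSTM input.
    """

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        self.homography_head = nn.Linear(feat_dim, 8)  # 8 DoF of a 3x3 homography

    def forward(self, frame_pair: torch.Tensor):
        feat = self.backbone(frame_pair)
        return self.homography_head(feat), feat


class GestureRecognizer(nn.Module):
    """CNN features per frame pair -> LSTM over the sequence -> gesture class."""

    def __init__(self, num_classes: int = 5, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.motion_cnn = HomographyCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor):
        # clips: (B, T, 2, H, W), i.e. T frame pairs per clip
        b, t = clips.shape[:2]
        _, feats = self.motion_cnn(clips.flatten(0, 1))   # (B*T, feat_dim)
        seq = feats.view(b, t, -1)                        # (B, T, feat_dim)
        out, _ = self.lstm(seq)
        return self.classifier(out[:, -1])                # last time step as summary


if __name__ == "__main__":
    model = GestureRecognizer(num_classes=5)
    dummy = torch.randn(2, 16, 2, 64, 64)  # 2 clips, 16 frame pairs, 64x64 frames
    print(model(dummy).shape)              # torch.Size([2, 5])
```

Using the last LSTM output as the clip-level summary is one simple pooling choice; the abstract does not specify how the temporal output is aggregated, nor how many gesture classes are recognized.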

Related research

09/21/2021
A Proposed Set of Communicative Gestures for Human Robot Interaction and an RGB Image-based Gesture Recognizer Implemented in ROS
We propose a set of communicative gestures and develop a gesture recogni...

02/01/2018
Real-Time Human-Robot Interaction for a Service Robot Based on 3D Human Activity Recognition and Human-like Decision Mechanism
This paper describes the development of a real-time Human-Robot Interact...

07/20/2020
Gesture Recognition for Initiating Human-to-Robot Handovers
Human-to-Robot handovers are useful for many Human-Robot Interaction sce...

07/02/2020
Attention-Oriented Action Recognition for Real-Time Human-Robot Interaction
Despite the notable progress made in action recognition tasks, not much ...

04/13/2023
Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots
Online recognition of gestures is critical for intuitive human-robot int...

01/16/2019
Robot Sequential Decision Making using LSTM-based Learning and Logical-probabilistic Reasoning
Sequential decision-making (SDM) plays a key role in intelligent robotic...

04/22/2022
Transferring ConvNet Features from Passive to Active Robot Self-Localization: The Use of Ego-Centric and World-Centric Views
The training of a next-best-view (NBV) planner for visual place recognit...
