Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction

03/14/2017
by Benoit Massé, et al.

The visual focus of attention (VFOA) has been recognized as a prominent conversational cue. We are interested in estimating and tracking the VFOAs associated with multi-party social interactions. We note that in this type of situation the participants either look at each other or at an object of interest; therefore their eyes are not always visible. Consequently, neither gaze nor VFOA estimation can be based on eye detection and tracking. We propose a method that exploits the correlation between eye gaze and head movements. Both VFOA and gaze are modeled as latent variables in a Bayesian switching state-space model. The proposed formulation leads to a tractable learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. The method is tested and benchmarked using two publicly available datasets that contain typical multi-party human-robot and human-human interactions.
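To give a flavor of the switching latent-variable idea, here is a minimal sketch (not the authors' actual model): the VFOA is a discrete latent state that switches between known target directions, the head orientation is assumed to turn only part of the way toward the gaze direction (a hypothetical coupling factor `alpha`), and a standard HMM forward pass recovers the posterior over the focus target from noisy head observations alone. The target angles, transition stickiness, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three VFOA targets at known angular positions (degrees).
targets = np.array([-30.0, 0.0, 30.0])
alpha = 0.6   # assumed head/gaze coupling: head turns ~60% of the way to the target
sigma = 5.0   # observation noise on head orientation (degrees)

# Sticky transition matrix: people tend to hold their focus for a while.
K = len(targets)
T_mat = np.full((K, K), 0.05 / (K - 1))
np.fill_diagonal(T_mat, 0.95)

def simulate(T=100):
    """Simulate a VFOA sequence and noisy head-orientation observations."""
    z = np.zeros(T, dtype=int)
    obs = np.zeros(T)
    z[0] = rng.integers(K)
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(K, p=T_mat[z[t - 1]])
        # Head orientation correlates with gaze direction (= target angle).
        obs[t] = alpha * targets[z[t]] + rng.normal(0.0, sigma)
    return z, obs

def forward_filter(obs):
    """HMM forward pass: filtering posterior over the VFOA given head angles."""
    T = len(obs)
    post = np.zeros((T, K))

    def lik(y):
        # Gaussian emission likelihood with mean alpha * target angle.
        return np.exp(-0.5 * ((y - alpha * targets) / sigma) ** 2)

    belief = np.full(K, 1.0 / K) * lik(obs[0])
    post[0] = belief / belief.sum()
    for t in range(1, T):
        belief = (post[t - 1] @ T_mat) * lik(obs[t])
        post[t] = belief / belief.sum()
    return post

z_true, obs = simulate()
post = forward_filter(obs)
z_hat = post.argmax(axis=1)
gaze_hat = targets[z_hat]   # gaze direction read off the inferred focus target
accuracy = (z_hat == z_true).mean()
print(f"VFOA accuracy: {accuracy:.2f}")
```

In the paper, gaze is itself a continuous latent variable rather than a deterministic function of the focus target, and the model is learned from data; this toy version only illustrates how a discrete switching variable plus a head-gaze coupling makes eye-free VFOA inference tractable.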


