Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform

03/04/2021
by Mahendran Subramanian, et al.

We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms: decoding how the user looks at the environment to understand where they want to navigate their mobility device. However, many natural eye-movements are not relevant for decoding action intention; only some are, which makes decoding challenging, the so-called Midas Touch Problem. Here, we present a new solution consisting of (1) deep computer vision to understand what object a user is looking at in their field of view, (2) an analysis of where on the object's bounding box the user is looking, and (3) a simple machine-learning classifier that determines whether the overt visual attention on the object is predictive of a navigation intention towards that object. Our decoding system ultimately determines whether the user wants to drive to, e.g., a door or is merely looking at it. Crucially, we find that when users look at an object and imagine moving towards it, the resulting eye-movements from this motor imagery (akin to neural interfaces) remain decodable. Once a driving intention, and thus the target location, is detected, our system instructs our autonomous wheelchair platform, the A.Eye-Drive, to navigate to the desired object while avoiding static and moving obstacles. Thus, for navigation purposes, we have realised a cognitive-level human interface: it requires the user only to cognitively interact with the desired goal, not to continuously steer the wheelchair to the target (low-level human interfacing).
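The abstract does not include code, so the sketch below is only a hypothetical illustration of steps (2) and (3): turning a short fixation sequence and a detected object's bounding box into simple gaze-on-object features (normalised landing position, spread, dwell time) and passing them to a lightweight classifier. The function name `gaze_on_box_features`, the feature choices, the SVM, and the placeholder labels are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.svm import SVC  # any lightweight classifier would do here

    def gaze_on_box_features(fixations, box):
        """Summarise where on an object's bounding box the user looked.

        fixations: (N, 3) array of [x_px, y_px, duration_s] gaze fixations
        box:       [x_min, y_min, x_max, y_max] from an object detector
        Returns mean normalised landing position inside the box, its spread,
        and total dwell time on the object.
        """
        x0, y0, x1, y1 = box
        w, h = max(x1 - x0, 1e-6), max(y1 - y0, 1e-6)
        inside = (
            (fixations[:, 0] >= x0) & (fixations[:, 0] <= x1)
            & (fixations[:, 1] >= y0) & (fixations[:, 1] <= y1)
        )
        on_box = fixations[inside]
        if len(on_box) == 0:
            return np.zeros(5)
        u = (on_box[:, 0] - x0) / w   # normalised horizontal position in box
        v = (on_box[:, 1] - y0) / h   # normalised vertical position in box
        return np.array([u.mean(), v.mean(), u.std(), v.std(), on_box[:, 2].sum()])

    # Illustrative training: X holds feature vectors for many viewing episodes,
    # y marks whether each episode ended in a navigation command (1) or not (0).
    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = (X[:, 4] > 0.5).astype(int)   # placeholder labels for this sketch only
    clf = SVC(kernel="rbf").fit(X, y)

    # At run time: detector bounding box + recent fixations -> intention decision.
    example_box = [120, 80, 320, 400]
    example_fix = np.array([[200.0, 150.0, 0.4], [210.0, 160.0, 0.6]])
    print(clf.predict([gaze_on_box_features(example_fix, example_box)]))

In a real system the labels would come from recorded driving episodes and the bounding boxes from the deep object detector named in step (1); the point of the sketch is only the shape of the feature-then-classify pipeline.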
