Learning a Visually Grounded Memory Assistant

10/07/2022
by Meera Hahn, et al.

We introduce a novel interface for the large-scale collection of human memory and assistance data. Using the 3D Matterport simulator, we create realistic indoor environments in which people perform specific embodied memory tasks that mimic daily household activities. This interface was then deployed on Amazon Mechanical Turk, allowing us to test and record human memory, navigation, and needs for assistance at a scale that was previously impossible. Using the interface, we collect the 'Visually Grounded Memory Assistant Dataset', which is aimed at developing our understanding of (1) the information people encode during navigation of 3D environments and (2) the conditions under which people ask for memory assistance. Additionally, we experiment with predicting when people will ask for assistance using models trained on hand-selected visual and semantic features. This provides an opportunity to build stronger ties between the machine-learning and cognitive-science communities through learned models of human perception, memory, and cognition.
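The prediction task described above — deciding from hand-selected features whether a person will ask for assistance — can be framed as binary classification. The sketch below is purely illustrative, not the authors' model: the feature names and synthetic data are hypothetical, and a simple logistic regression stands in for whatever models the paper actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hand-selected features per navigation step, e.g.
# [time_since_last_landmark, num_rooms_visited, semantic_similarity_to_goal].
n = 200
X = rng.normal(size=(n, 3))
# Synthetic labels: here assistance requests correlate with the first feature.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)

# Append a bias column and fit logistic regression by gradient descent.
Xb = np.hstack([X, np.ones((n, 1))])
w = np.zeros(Xb.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted probability of asking for help
    w -= 0.1 * Xb.T @ (p - y) / n       # gradient step on the log-loss

preds = 1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5
accuracy = (preds == y.astype(bool)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice the features would be extracted from the recorded trajectories and visual observations in the dataset rather than sampled at random, and any reasonable classifier could replace the hand-rolled gradient loop.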



research 09/04/2019
Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning
Mobile agents that can leverage help from humans can potentially accompl...

research 11/20/2017
Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments
A robot that can carry out a natural-language instruction has been a dre...

research 12/01/2021
Secure and Safety Mobile Network System for Visually Impaired People
The proposed system aims to be a techno-friend of visually impaired peop...

research 12/10/2018
Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention
We present Vision-based Navigation with Language-based Assistance (VNLA)...

research 11/18/2022
Ask4Help: Learning to Leverage an Expert for Embodied Tasks
Embodied AI agents continue to become more capable every year with the a...

research 06/13/2012
Toward Experiential Utility Elicitation for Interface Customization
User preferences for automated assistance often vary widely, depending o...

research 10/29/2019
Navigation Agents for the Visually Impaired: A Sidewalk Simulator and Experiments
Millions of blind and visually-impaired (BVI) people navigate urban envi...
