Attend Before you Act: Leveraging human visual attention for continual learning

07/25/2018
by   Khimya Khetarpal, et al.

When humans perform a task, such as playing a game, they selectively attend to certain parts of the visual input, gathering relevant information and sequentially combining it into a representation of the sensory data. In this work, we explore leveraging where humans look in an image as an implicit indication of what is salient for decision making. We build on top of the UNREAL architecture in DeepMind Lab's 3D navigation maze environment. We train the agent with both original images and foveated images, the latter generated by overlaying the original images with saliency maps produced by a real-time spectral residual technique. We investigate the effectiveness of this approach for transfer learning by measuring performance when noise is introduced into the environment.
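The spectral residual technique referenced in the abstract (Hou & Zhang, 2007) derives a saliency map from an image's frequency spectrum: the log-amplitude spectrum minus its local average yields a "residual" that, transformed back to the spatial domain with the original phase, highlights salient regions. The sketch below is a minimal NumPy-only illustration of that idea, plus a hypothetical `foveate` blend; the exact overlay weighting used in the paper is not specified in this abstract, so the 0.5/0.5 mix here is an assumption.

```python
import numpy as np

def _box_blur(arr, k):
    """Local average with a k x k box filter (edge-padded)."""
    pad = np.pad(arr, k // 2, mode="edge")
    out = np.zeros_like(arr)
    h, w = arr.shape
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def spectral_residual_saliency(gray, avg_kernel=3, smooth_kernel=5):
    """Spectral residual saliency (Hou & Zhang, 2007).

    gray: 2-D float array (grayscale image).
    Returns a saliency map normalized to [0, 1].
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local average.
    residual = log_amp - _box_blur(log_amp, avg_kernel)
    # Reconstruct with the original phase; squared magnitude is raw saliency.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # Smooth, then normalize to [0, 1].
    sal = _box_blur(sal, smooth_kernel)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def foveate(gray, saliency):
    """Overlay saliency on the image (blend weights are an assumption,
    not taken from the paper)."""
    return gray * (0.5 + 0.5 * saliency)
```

A production pipeline would typically use `cv2.saliency.StaticSaliencySpectralResidual` from OpenCV's saliency module, which implements the same algorithm in optimized form.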

