Neuroevolution of Self-Interpretable Agents

03/18/2020
by Yujin Tang, et al.

Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of selective attention in perception, which lets us remain focused on the important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck. By constraining access to only a small fraction of the visual input, we show that their policies are directly interpretable in pixel space. We find neuroevolution ideal for training self-attention architectures for vision-based reinforcement learning (RL) tasks, as it allows us to incorporate modules that include discrete, non-differentiable operations useful to our agent. We argue that self-attention has properties similar to indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, enabling our agent to solve challenging vision-based tasks with at least 1000x fewer parameters than existing methods. Since our agents attend only to task-critical visual hints, they are able to generalize to environments where task-irrelevant elements are modified, whereas conventional methods fail. Videos of our results and source code are available at https://attentionagent.github.io/
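The key-query mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the patch dimensions, projection size, and function names here are assumptions chosen for illustration. It shows how a small number of key-query parameters induces a large implicit N x N attention matrix, from which the agent keeps only the most-attended patch indices (the "bottleneck"):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_patches(patches, W_k, W_q, top_k=10):
    """Self-attention bottleneck sketch (hypothetical helper).

    patches: (N, m) flattened image patches.
    W_k, W_q: (m, d) key/query projections -- only 2*m*d learned
    parameters, yet they induce an (N, N) attention matrix.
    Returns indices of the top_k most-attended patches.
    """
    K = patches @ W_k                                   # (N, d) keys
    Q = patches @ W_q                                   # (N, d) queries
    A = softmax(K @ Q.T / np.sqrt(W_k.shape[1]))        # (N, N) implicit matrix
    votes = A.sum(axis=0)                               # attention received per patch
    return np.argsort(votes)[-top_k:][::-1]             # most important patch indices

# Toy usage: 64 patches of 147 dims (e.g. 7x7 RGB), projected to d=4.
rng = np.random.default_rng(0)
patches = rng.normal(size=(64, 147))
W_k = rng.normal(size=(147, 4))
W_q = rng.normal(size=(147, 4))
idx = attend_to_patches(patches, W_k, W_q, top_k=10)
```

The top-k selection is discrete and non-differentiable, which is why gradient-free neuroevolution, rather than backpropagation, is a natural fit for training such an architecture.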


Related research

07/09/2020: Attention or memory? Neurointerpretable agents in space and time
  In neuroscience, attention has been shown to bidirectionally interact wi...

02/08/2021: Unlocking Pixels for Reinforcement Learning via Implicit Attention
  There has recently been significant interest in training reinforcement l...

09/15/2022: Towards self-attention based visual navigation in the real world
  Vision guided navigation requires processing complex visual information ...

04/07/2023: PSLT: A Light-weight Vision Transformer with Ladder Self-Attention and Progressive Shift
  Vision Transformer (ViT) has shown great potential for various visual ta...

03/16/2020: Self-Supervised Discovering of Causal Features: Towards Interpretable Reinforcement Learning
  Deep reinforcement learning (RL) has recently led to many breakthroughs ...

04/06/2019: Reinforcement Learning with Attention that Works: A Self-Supervised Approach
  Attention models have had a significant positive impact on deep learning...

12/07/2021: Hybrid Self-Attention NEAT: A novel evolutionary approach to improve the NEAT algorithm
  This article presents a "Hybrid Self-Attention NEAT" method to improve t...
