Unlocking Pixels for Reinforcement Learning via Implicit Attention

02/08/2021
by Krzysztof Choromanski, et al.

There has recently been significant interest in training reinforcement learning (RL) agents in vision-based environments. This poses many challenges, such as high dimensionality and the potential for observational overfitting through spurious correlations. A promising approach to both problems is a self-attention bottleneck, which provides a simple and effective framework for learning high-performing policies even in the presence of distractions. However, due to the poor scalability of attention architectures, these methods do not scale beyond low-resolution visual inputs and must rely on large patches (and thus small attention matrices). In this paper we make use of new efficient attention algorithms, recently shown to be highly effective for Transformers, and demonstrate that these techniques can be applied in the RL setting. This allows our attention-based controllers to scale to larger visual inputs and facilitates the use of smaller patches, even individual pixels, improving generalization. In addition, we propose a new efficient algorithm that approximates softmax attention with what we call hybrid random features, leveraging the theory of angular kernels. We show theoretically and empirically that hybrid random features are a promising approach when using attention for vision-based RL.
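To make the connection between efficient attention and per-pixel controllers concrete, the sketch below shows one standard way to linearize softmax attention with positive random features (in the spirit of Performer-style FAVOR+), together with a sign-based estimator of the angular kernel that hybrid random feature constructions build on. This is a minimal illustration under stated assumptions, not the authors' implementation; the names and sizes used here (`linear_attention`, `num_features`, the patch and embedding dimensions) are illustrative choices, not from the paper.

```python
# Minimal sketch: linear-time attention over pixel patches via random features.
import numpy as np

def positive_random_features(x, w):
    # x: (n, d) queries or keys; w: (m, d) Gaussian projections.
    # phi(x)_j = exp(w_j . x - ||x||^2 / 2) / sqrt(m) gives a non-negative,
    # unbiased estimator of the softmax kernel exp(q . k).
    m = w.shape[0]
    return np.exp(x @ w.T - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def linear_attention(q, k, v, num_features=256, rng=None):
    # Approximates softmax attention in O(n * m * d) rather than O(n^2 * d),
    # which is what lets an attention bottleneck scale to per-pixel patches.
    rng = np.random.default_rng(rng)
    d = q.shape[-1]
    w = rng.standard_normal((num_features, d))
    q_prime = positive_random_features(q / d**0.25, w)  # 1/d^(1/4) scaling gives exp(q.k / sqrt(d))
    k_prime = positive_random_features(k / d**0.25, w)
    kv = k_prime.T @ v                       # (m, d_v), computed once
    normalizer = q_prime @ k_prime.sum(0)    # (n,), row sums of the implicit attention matrix
    return (q_prime @ kv) / normalizer[:, None]

def angular_kernel_features(x, w):
    # Sign-based random features: E[sign(w.x) * sign(w.y)] = 1 - 2*theta(x, y)/pi,
    # the angular-kernel identity that hybrid random feature estimators leverage.
    return np.sign(x @ w.T) / np.sqrt(w.shape[0])

if __name__ == "__main__":
    n, d = 1024, 32            # e.g. 1024 pixel patches with 32-dim embeddings (illustrative)
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    out = linear_attention(q, k, v, num_features=256, rng=0)
    print(out.shape)           # (1024, 32)
```

Because the keys and values are summarized once into an (m x d) statistic, the cost grows linearly with the number of patches rather than quadratically, which is what makes patches as small as individual pixels tractable in this sketch.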
