
Pre-training Attention Mechanisms

12/15/2017 · by Jack Lindsey, et al. · Stanford University

Recurrent neural networks with differentiable attention mechanisms have had success in generative and classification tasks. We show that the classification performance of such models can be enhanced by guiding a randomly initialized model to attend to salient regions of the input in early training iterations. We further show that, if explicit heuristics for guidance are unavailable, a model that is pretrained on an unsupervised reconstruction task can discover good attention policies without supervision. We demonstrate that increased efficiency of the attention mechanism itself contributes to these performance improvements. Based on these insights, we introduce bootstrapped glimpse mimicking, a simple, theoretically task-general method of more effectively training attention models. Our work draws inspiration from and parallels results on human learning of attention.
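To make the idea of guiding attention in early training more concrete, the sketch below adds an auxiliary loss that pulls a model's predicted glimpse location toward a heuristic salient point for the first few thousand updates, then drops it. This is a minimal, hypothetical illustration only: the model here is a single-glimpse feedforward toy rather than the recurrent attention model the abstract refers to, and every name and hyperparameter (`TinyGlimpseClassifier`, `heuristic_salient_location`, the intensity-centroid saliency heuristic, `guide_steps`, `guide_weight`) is an assumption of this sketch, not the paper's implementation.

```python
# Hypothetical sketch of early-training attention guidance, not the authors' exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


def heuristic_salient_location(images):
    """Intensity centroid in [-1, 1] coordinates; a stand-in saliency heuristic."""
    b, _, h, w = images.shape
    ys = torch.linspace(-1, 1, h, device=images.device)
    xs = torch.linspace(-1, 1, w, device=images.device)
    mass = images.sum(dim=1)                                   # (b, h, w)
    mass = mass / (mass.sum(dim=(1, 2), keepdim=True) + 1e-8)  # normalize to a distribution
    cy = (mass.sum(dim=2) * ys).sum(dim=1)                     # (b,) centroid row -> y
    cx = (mass.sum(dim=1) * xs).sum(dim=1)                     # (b,) centroid col -> x
    return torch.stack([cx, cy], dim=1)                        # (b, 2)


class TinyGlimpseClassifier(nn.Module):
    """Toy single-glimpse model for 1x28x28 inputs: predicts a glimpse location
    and conditions the classifier on image features plus that location."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.loc_head = nn.Linear(128, 2)                # glimpse location in [-1, 1]^2
        self.cls_head = nn.Linear(128 + 2, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        loc = torch.tanh(self.loc_head(h))
        logits = self.cls_head(torch.cat([h, loc], dim=1))
        return logits, loc


def training_step(model, optimizer, images, labels, step,
                  guide_steps=1000, guide_weight=1.0):
    """Classification loss plus, for the first `guide_steps` updates, an auxiliary
    term pulling the predicted glimpse location toward a heuristic salient point."""
    logits, loc = model(images)
    loss = F.cross_entropy(logits, labels)
    if step < guide_steps:
        target_loc = heuristic_salient_location(images)
        loss = loss + guide_weight * F.mse_loss(loc, target_loc)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After `guide_steps` updates the auxiliary term is removed and training continues on the classification loss alone, reflecting the abstract's notion of guidance applied only in early iterations; the same hook could, in principle, take its target locations from a reconstruction-pretrained model rather than a hand-coded heuristic.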


Related research

11/13/2021  Where to Look: A Unified Attention Model for Visual Recognition with Reinforcement Learning
The idea of using the recurrent neural network for visual attention has ...

11/22/2022  Simulating Human Gaze with Neural Visual Attention
Existing models of human visual attention are generally unable to incorp...

07/04/2015  Describing Multimedia Content using Attention-based Encoder-Decoder Networks
Whereas deep neural networks were first mostly used for classification t...

09/19/2016  A Cheap Linear Attention Mechanism with Fast Lookups and Fixed-Size Representations
The softmax content-based attention mechanism has proven to be very bene...

11/19/2020  On the Dynamics of Training Attention Models
The attention mechanism has been widely used in deep neural networks as ...

12/02/2020  Attention-gating for improved radio galaxy classification
In this work we introduce attention as a state of the art mechanism for ...