Recurrent Soft Attention Model for Common Object Recognition

05/04/2017
by Liliang Ren et al.

We propose the Recurrent Soft Attention Model, which integrates visual attention from the original image into an LSTM memory cell through a down-sample network. The model recurrently transmits visual attention to the memory cells for glimpse mask generation, a more natural way to integrate and exploit attention in general object detection and recognition problems. We evaluate the model by top-1 accuracy on the CIFAR-10 dataset. The experiments show that the down-sample network and the feedback mechanism both play an effective role in the overall network structure.
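The paper itself does not include code, but the loop it describes, a recurrent state that generates a soft glimpse mask over down-sampled image features and is then updated from the attended glimpse, can be sketched in a few lines. The sketch below is a minimal NumPy illustration, not the authors' implementation: the average-pool `downsample`, the weight names (`W_att`, `W_in`, `W_rec`), the 16-dim state, and the plain `tanh` recurrence standing in for the LSTM cell are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def downsample(img, factor=4):
    """Average-pool by `factor`; a stand-in for the paper's
    down-sample network, whose exact layers are not specified here."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical sizes: a 32x32 single-channel input (CIFAR-10 spatial size)
# and a 16-dim recurrent state (the LSTM memory cell in the paper).
img = rng.random((32, 32))
feat = downsample(img)                         # 8x8 feature map
F = feat.reshape(-1, 1)                        # 64 locations x 1 feature
state = np.zeros(16)

W_att = rng.standard_normal((64, 16)) * 0.1    # state -> attention scores
W_in  = rng.standard_normal((16, 1)) * 0.1     # glimpse -> state update
W_rec = rng.standard_normal((16, 16)) * 0.1    # state -> state

for t in range(3):
    scores = W_att @ state                     # one score per location
    alpha = softmax(scores)                    # soft glimpse mask, sums to 1
    glimpse = F.T @ alpha                      # attention-weighted feature
    state = np.tanh(W_in @ glimpse + W_rec @ state)   # recurrent update
```

Each iteration feeds the attended glimpse back into the state, and the next mask is generated from that updated state, which is the feedback mechanism the abstract credits for the model's effectiveness.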

Related research

09/27/2022 - Reconstruction-guided attention improves the robustness and shape processing of neural networks
Many visual phenomena suggest that humans use top-down generative or rec...

10/11/2021 - Recurrent Attention Models with Object-centric Capsule Representation for Multi-object Recognition
The visual system processes a scene using a sequence of selective glimps...

11/11/2019 - Conditionally Learn to Pay Attention for Sequential Visual Task
Sequential visual task usually requires to pay attention to its current ...

06/12/2017 - Enriched Deep Recurrent Visual Attention Model for Multiple Object Recognition
We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) - an...

02/15/2018 - Teaching Machines to Code: Neural Markup Generation with Visual Attention
We present a deep recurrent neural network model with soft visual attent...

05/09/2017 - CHAM: action recognition using convolutional hierarchical attention model
Recently, the soft attention mechanism, which was originally proposed in...

04/28/2018 - CRAM: Clued Recurrent Attention Model
To overcome the poor scalability of convolutional neural network, recurr...