Multiple Object Recognition with Visual Attention

12/24/2014 · Jimmy Ba, et al. · Google

We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.


1 Introduction

Convolutional neural networks have recently been very successful on a variety of recognition and classification tasks (Krizhevsky et al., 2012; Goodfellow et al., 2013; Jaderberg et al., 2014a; Vinyals et al., 2014; Karpathy et al., 2014). One of the main drawbacks of convolutional networks (ConvNets) is their poor scalability with increasing input image size, so efficient implementations of these models on multiple GPUs (Krizhevsky et al., 2012) or even spanning multiple machines (Dean et al., 2012b) have become necessary.

Applications of ConvNets to multi-object and sequence recognition from images have avoided working with big images and instead focused on using ConvNets for recognizing characters or short sequence segments from image patches containing reasonably tightly cropped instances (Goodfellow et al., 2013; Jaderberg et al., 2014a). Applying such a recognizer to large images containing uncropped instances requires integrating it with a separately trained sequence detector or a bottom-up proposal generator. Non-maximum suppression is often performed to obtain the final detections. While combining separate components trained using different objective functions has been shown to be worse than end-to-end training of a single system in other domains, integrating object localization and recognition into a single globally-trainable architecture has been difficult.

In this work, we take inspiration from the way humans perform visual sequence recognition tasks such as reading by continually moving the fovea to the next relevant object or character, recognizing the individual object, and adding the recognized object to our internal representation of the sequence. Our proposed system is a deep recurrent neural network that at each step processes a multi-resolution crop of the input image, called a glimpse. The network uses information from the glimpse to update its internal representation of the input, and outputs the next glimpse location and possibly the next object in the sequence. The process continues until the model decides that there are no more objects to process. We show how the proposed system can be trained end-to-end by approximately maximizing a variational lower bound on the label sequence log-likelihood. This training procedure can be used to train the model to both localize and recognize multiple objects purely from label sequences.

We evaluate the model on the task of transcribing multi-digit house numbers from publicly available Google Street View imagery. Our attention-based model outperforms the state-of-the-art ConvNets on tightly cropped inputs while using both fewer parameters and much less computation. We also show that our model outperforms ConvNets by a much larger margin in the more realistic setting of larger and less tightly cropped input sequences.

2 Related work

Recognizing multiple objects in images has been one of the most important goals of computer vision. Perhaps the most common approach to image-based classification of character sequences involves combining a sliding window detector with a character classifier (Wang et al., 2012; Jaderberg et al., 2014b). The detector and the classifier are typically trained separately, using different loss functions. The seminal work on ConvNets of LeCun et al. (1998) introduced a graph transformer network architecture for recognizing a sequence of digits when reading checks, and also showed how the whole system could be trained end-to-end. That system, however, still relied on a number of ad-hoc components for extracting candidate locations.

More recently, ConvNets operating on cropped sequences of characters have achieved state-of-the-art performance on house number recognition (Goodfellow et al., 2013) and natural scene text recognition (Jaderberg et al., 2014a). Goodfellow et al. (2013) trained a separate ConvNets classifier for each character position in a house number with all weights except for the output layer shared among the classifiers. Jaderberg et al. (2014a) showed that synthetically generated images of text can be used to train ConvNets classifiers that achieve state-of-the-art text recognition performance on real-world images of cropped text.

Our work builds on the long line of previous attempts at attention-based visual processing (Itti et al., 1998; Larochelle & Hinton, 2010; Alexe et al., 2012), and in particular extends the recurrent attention model (RAM) proposed in Mnih et al. (2014). While RAM was shown to learn successful gaze strategies on cluttered digit classification tasks and on a toy visual control problem, it was not shown to scale to real-world image tasks or to multiple objects. Our approach of learning by maximizing a variational lower bound is equivalent to the reinforcement learning procedure used in RAM and is related to the work of Maes et al. (2009), who showed how reinforcement learning can be used to tackle general structured prediction problems.


Figure 1: The deep recurrent attention model.

3 Deep recurrent visual attention model

For simplicity, we first describe how our model can be applied to classifying a single object and later show how it can be extended to multiple objects. Processing an image I with an attention-based model is a sequential process with N steps, where each step consists of a saccade followed by a glimpse. At each step n, the model receives a location l_n along with a glimpse observation x_n taken at location l_n. The model uses the observation to update its internal state and outputs the location l_{n+1} to process at the next time-step. Usually the number of pixels in the glimpse x_n is much smaller than the number of pixels in the original image I, making the computational cost of processing a single glimpse independent of the size of the image.

A graphical representation of our model is shown in Figure 1. The model can be broken down into a number of sub-components, each mapping some input into a vector output. We will use the term “network” to describe these non-linear sub-components, since they are typically multi-layered neural networks.

Glimpse network: The glimpse network is a non-linear function that receives the current input image patch, or glimpse, x_n and its location tuple l_n, where l_n = (x, y), as input and outputs a vector g_n. The job of the glimpse network is to extract a set of useful features from location l_n of the raw visual input. We will use G_image(x_n | W_image) to denote the output vector of the function that takes an image patch x_n and is parameterized by weights W_image. G_image typically consists of three convolutional hidden layers without any pooling layers, followed by a fully connected layer. Separately, the location tuple is mapped to G_loc(l_n | W_loc) using a fully connected hidden layer, where both G_image(x_n | W_image) and G_loc(l_n | W_loc) have the same dimension. We combine the high-bandwidth image information with the low-bandwidth location tuple by multiplying the two vectors element-wise to get the final glimpse feature vector g_n:

g_n = G_image(x_n | W_image) ⊙ G_loc(l_n | W_loc)    (1)

This type of multiplicative interaction between “what” and “where” was initially proposed by Larochelle & Hinton (2010).
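A minimal NumPy sketch of this multiplicative “what × where” interaction (equation 1); the dimensions, the ReLU location embedding, and all variable names here are illustrative assumptions, with a random vector standing in for the conv-net output G_image(x_n):

```python
import numpy as np

rng = np.random.default_rng(0)

def glimpse_features(patch_feat, loc, W_loc, b_loc):
    """Combine 'what' (patch features) and 'where' (location embedding)
    by element-wise multiplication, as in equation (1)."""
    loc_feat = np.maximum(0.0, W_loc @ loc + b_loc)  # ReLU location embedding
    return patch_feat * loc_feat                     # g_n = G_image * G_loc

d = 8                               # shared feature dimension (illustrative)
patch_feat = rng.random(d)          # stand-in for conv-net output G_image(x_n)
loc = np.array([0.2, -0.1])         # glimpse centre (x, y) in image coordinates
W_loc, b_loc = rng.random((d, 2)), np.zeros(d)

g = glimpse_features(patch_feat, loc, W_loc, b_loc)
```

Because the two embeddings share a dimension, the location vector acts as a learned gate over the patch features.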

Recurrent network: The recurrent network aggregates information extracted from the individual glimpses and combines it in a coherent manner that preserves spatial information. The glimpse feature vector g_n from the glimpse network is supplied as input to the recurrent network at each time step. The recurrent network consists of two recurrent layers with non-linear functions f_rec^(1) and f_rec^(2). We define the two outputs of the recurrent layers as r_n^(1) and r_n^(2):

r_n^(1) = f_rec^(1)(g_n, r_{n-1}^(1) | W_r1),    r_n^(2) = f_rec^(2)(r_n^(1), r_{n-1}^(2) | W_r2)    (2)

We use Long Short-Term Memory (LSTM) units (Hochreiter & Schmidhuber, 1997) for the non-linearity because of their ability to learn long-range dependencies and their stable learning dynamics.

Emission network: The emission network takes the current state of the recurrent network as input and predicts where to extract the next image patch for the glimpse network. It acts as a controller that directs attention based on the current internal state of the recurrent network. It consists of a fully connected hidden layer that maps the feature vector r_n^(2) from the top recurrent layer to a coordinate tuple l_{n+1}:

l_{n+1} = f_emit(r_n^(2) | W_e)    (3)

Context network: The context network provides the initial state for the recurrent network, and its output is used by the emission network to predict the location of the first glimpse. The context network takes a down-sampled, low-resolution version of the whole input image, I_coarse, and outputs a fixed-length vector c_I. The contextual information provides sensible hints about where the potentially interesting regions of a given image are. The context network employs three convolutional layers that map the coarse image I_coarse to a feature vector used as the initial state of the top recurrent layer, r_0^(2). The bottom layer, however, is initialized with a vector of zeros, for reasons we will explain later.

Classification network: The classification network outputs a prediction for the class label y based on the final feature vector r_N^(1) of the lower recurrent layer. The classification network has one fully connected hidden layer and a softmax output layer for the class y:

P(y | I) = f_cls(r_N^(1) | W_cls)    (4)

Ideally, the deep recurrent attention model should learn to look at locations that are relevant for classifying objects of interest. The existence of the contextual information, however, provides a “short cut” solution such that it is much easier for the model to learn from contextual information than by combining information from different glimpses. We prevent such undesirable behavior by connecting the context network and classification network to different recurrent layers in our deep model. As a result, the contextual information cannot be used directly by the classification network and only affects the sequence of glimpse locations produced by the model.
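The wiring described above can be sketched as follows; the tanh recurrences are illustrative stand-ins for the LSTM layers, and all shapes and names are toy-sized assumptions. The point is the asymmetry: the context vector initializes only the top layer, so it can reach the classifier (which reads the bottom layer) only through the glimpse locations it induces.

```python
import numpy as np

d = 4
rng = np.random.default_rng(1)
W1 = rng.standard_normal((d, 2 * d))
W2 = rng.standard_normal((d, 2 * d))
We = rng.standard_normal((2, d))

f1 = lambda g, r: np.tanh(W1 @ np.concatenate([g, r]))    # stand-in for LSTM layer 1
f2 = lambda r1, r: np.tanh(W2 @ np.concatenate([r1, r]))  # stand-in for LSTM layer 2
emit = lambda r2: We @ r2                                 # next glimpse location (eq. 3)

def run(glimpses, context_vec):
    r1, r2 = np.zeros(d), context_vec  # context initialises ONLY the top layer
    locs = []
    for g in glimpses:
        r1 = f1(g, r1)   # bottom layer sees the glimpse features (eq. 2)
        r2 = f2(r1, r2)  # top layer sees the bottom layer
        locs.append(emit(r2))
    # classifier reads r1; the context influenced it only via the chosen locations
    return r1, locs

glimpses = [rng.random(d) for _ in range(3)]
r1_final, locs = run(glimpses, rng.random(d))
```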

3.1 Learning where and what

Given the class label y of image I, we can formulate learning as a supervised classification problem with a cross-entropy objective function. The attention model predicts the class label conditioned on intermediate latent location variables l from each glimpse and extracts the corresponding patches. We can thus maximize the likelihood of the class label by marginalizing over the glimpse locations l.

The marginalized objective function can be learned by optimizing its variational free energy lower bound F:

log p(y | I, W) = log Σ_l p(l | I, W) p(y | l, I, W)    (5)
              ≥ Σ_l p(l | I, W) log p(y | l, I, W) = F    (6)

where the inequality follows from Jensen's inequality.

The learning rules can be derived by taking derivatives of the above free energy with respect to the model parameters W:

∂F/∂W = Σ_l p(l | I, W) ∂log p(y | l, I, W)/∂W + Σ_l log p(y | l, I, W) ∂p(l | I, W)/∂W    (7)
      = Σ_l p(l | I, W) [ ∂log p(y | l, I, W)/∂W + log p(y | l, I, W) ∂log p(l | I, W)/∂W ]    (8)

where the second line uses the identity ∂p/∂W = p ∂log p/∂W.

It is infeasible to evaluate the exponentially many glimpse location sequences during training. The summation in equation 8 can instead be approximated using M Monte Carlo samples:

l̃^m ∼ p(l | I, W)    (9)
∂F/∂W ≈ (1/M) Σ_{m=1}^{M} [ ∂log p(y | l̃^m, I, W)/∂W + log p(y | l̃^m, I, W) ∂log p(l̃^m | I, W)/∂W ]    (10)

Equation 10 gives a practical algorithm for training the deep attention model. Namely, we can sample the glimpse location prediction from the model after each glimpse. The samples are then used in standard backpropagation to obtain an estimator of the gradient with respect to the model parameters. Notice that the log-likelihood log p(y | l̃^m, I, W) has an unbounded range, which can introduce substantial variance into the gradient estimator. In particular, when a sampled location misses the object in the image, the log-likelihood induces an undesirably large gradient update that is backpropagated through the rest of the model.

We can reduce the variance in the estimator of equation 10 by replacing log p(y | l̃^m, I, W) with a 0/1 discrete indicator function R and using the baseline technique from Mnih et al. (2014):

R = 1 if the model classifies correctly, i.e. y = arg max_y' log p(y' | l̃^m, I, W), and 0 otherwise    (11)
b_n = f_b(r_n^(2) | W_b)    (12)

As shown, the recurrent network state vector r_n^(2) is used to estimate a state-based baseline b_n for each glimpse, which significantly improves learning efficiency. The baseline effectively centers the random variable R and can be learned by regressing towards the expected value of R. Given both the indicator function and the baseline, we have the following gradient update:

∂F/∂W ≈ (1/M) Σ_{m=1}^{M} [ ∂log p(y | l̃^m, I, W)/∂W + λ (R^m − b_n) ∂log p(l̃^m | I, W)/∂W ]    (13)

where the hyper-parameter λ balances the scale of the two gradient components. In fact, with the 0/1 indicator function, the learning rule of equation 13 is equivalent to the REINFORCE (Williams, 1992) learning rule employed in Mnih et al. (2014) for training their attention model. When viewed as a reinforcement learning update, the second term in equation 13 is an unbiased estimate of the gradient, with respect to W, of the expected reward under the model's glimpse policy. Here we show that such a learning rule can also be motivated by approximately optimizing the free energy.
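A toy illustration of the centered score-function update in equation 13, using a one-dimensional Gaussian “glimpse policy” whose mean is learned; the reward window, step sizes, and baseline update rule are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the eq. (13) update: a Gaussian glimpse policy over a 1-D
# "location"; reward R = 1 when the sampled location lands near the target.
mu, sigma, target = 0.0, 0.5, 1.0   # policy mean (learned), fixed std, true object
baseline, lam, lr = 0.0, 1.0, 0.05  # running baseline b, weight lambda, step size

for _ in range(500):
    l = rng.normal(mu, sigma)                  # sample l~ from p(l | I, W)  (eq. 9)
    R = 1.0 if abs(l - target) < 0.5 else 0.0  # 0/1 indicator reward (eq. 11)
    score = (l - mu) / sigma**2                # d/dmu of log N(l; mu, sigma)
    mu += lr * lam * (R - baseline) * score    # centred REINFORCE term of eq. (13)
    baseline += 0.1 * (R - baseline)           # regress b towards E[R]  (eq. 12)
```

The policy mean should drift toward the rewarded region; subtracting the baseline leaves the update direction unchanged in expectation while shrinking its variance.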

During inference, the feedforward location prediction can be used as a deterministic prediction of the location coordinates for extracting the next input image patch; the model then behaves as a normal feedforward network. Alternatively, our marginalized objective function (equation 5) suggests a procedure for estimating the expected class prediction by sampling location sequences and averaging their predictions:

E[p(y | I, W)] ≈ (1/M) Σ_{m=1}^{M} p(y | l̃^m, I, W)    (14)

This allows the attention model to be evaluated multiple times on each image with the classification predictions being averaged. In practice, we found that averaging the log probabilities gave the best performance.
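A sketch of this test-time averaging with made-up numbers; here three stochastic evaluations of the same image over four classes are combined by averaging log probabilities, the variant the text reports works best:

```python
import numpy as np

def mc_class_prediction(log_probs_per_sample):
    """Average class predictions over M sampled glimpse sequences (eq. 14).
    Averaging is done in log space, as the text reports works best."""
    avg_log = np.mean(log_probs_per_sample, axis=0)  # (M, C) -> (C,)
    return int(np.argmax(avg_log))

# Three stochastic evaluations of one image, 4 classes (illustrative numbers).
log_probs = np.log(np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.40, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
]))
pred = mc_class_prediction(log_probs)
```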

In this paper, we encode the real-valued glimpse location tuple l_n in a Cartesian coordinate system centered at the middle of the input image. The ratio converting unit width in this coordinate system to a number of pixels is a hyper-parameter, and it presents an exploration-versus-exploitation trade-off. The proposed model's performance is very sensitive to this setting; we found that a value of around 15% of the input image width tends to work well.
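The coordinate convention might be implemented as follows; the 15% default is taken from the text, while the function name and the rest of the details are assumptions for illustration:

```python
def loc_to_pixels(loc_xy, image_width, image_height, unit_ratio=0.15):
    """Map a model-space location (origin at the image centre) to pixel
    coordinates. unit_ratio is the hyper-parameter from the text: pixels
    per coordinate unit, expressed as a fraction of the image width."""
    scale = unit_ratio * image_width
    cx, cy = image_width / 2.0, image_height / 2.0
    x, y = loc_xy
    return cx + x * scale, cy + y * scale

# Unit step in model space moves the glimpse 15 pixels on a 100-pixel image.
px = loc_to_pixels((1.0, -1.0), 100, 100)
```

A larger ratio makes each unit of policy output cover more pixels (more exploration); a smaller one concentrates glimpses near the centre (more exploitation).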

3.2 Multi-object/Sequential classification as a visual attention task

Our proposed attention model can be easily extended to solve classification tasks involving multiple objects. To train the deep recurrent attention model for the sequential recognition task, the multiple object labels for a given image need to be cast into an ordered sequence y_1, y_2, …, y_S. The deep recurrent attention model then learns to predict one object at a time as it explores the image in a sequential manner. We use a simple fixed number of glimpses for each target in the sequence. In addition, a new class label for the “end-of-sequence” symbol is included to deal with variable numbers of objects in an image; we can stop the recurrent attention model once the terminal symbol is predicted. Concretely, the objective function for the sequential prediction is

F = Σ_{s=1}^{S} Σ_l p(l | I, W) log p(y_s | l, I, W)    (15)

The learning rule is derived from the free energy as in equation 13, and the gradient is accumulated across all targets. We assign a fixed number of glimpses, N, to each target. Assuming S targets in an image, the model is trained with N × (S + 1) glimpses, the extra N covering the end-of-sequence symbol. The benefit of using a recurrent model for multiple object recognition is that it is compact and simple, yet flexible enough to deal with images containing variable numbers of objects.

Learning a model from images containing many objects is a challenging setup. We can reduce the difficulty by modifying our indicator function to be proportional to the number of targets the model predicted correctly:

R = (number of correctly predicted targets) / S    (16)

In addition, we restrict the gradient of the objective function so that it only includes glimpses up to the first mislabeled target and ignores the targets after the first mistake. This curriculum-like adaptation of the learning is crucial for obtaining a high-performance attention model for sequential prediction.
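One way to combine the proportional reward of equation 16 with the stop-at-first-mistake restriction is to count only the correct prefix; this exact combination is a sketch, not necessarily the paper's implementation:

```python
def sequence_reward(pred, target):
    """Reward proportional to the number of correctly predicted targets
    (eq. 16), counting only the prefix up to the first mistake -- targets
    after the first error contribute no learning signal."""
    correct = 0
    for p, t in zip(pred, target):
        if p != t:
            break
        correct += 1
    return correct / max(len(target), 1)

# First two digits right, third wrong: reward 2/4, fourth digit ignored.
r = sequence_reward([1, 9, 4, 2], [1, 9, 5, 2])
```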

4 Experiments

To show the effectiveness of the deep recurrent attention model (DRAM), we first investigate a number of multi-object classification tasks involving a variant of MNIST. We then apply the proposed attention model to a real-world object recognition task using the multi-digit SVHN dataset Netzer et al. (2011) and compare with the state-of-the-art deep ConvNets. A description of the models and training protocols we used can be found in the Appendix.

As suggested in Mnih et al. (2014), classification performance can be improved by using a glimpse network with two different scales. Namely, given a glimpse location l_n, we extract two patches: the original patch and a down-sampled, coarser image patch covering a larger area. We use the concatenation of the two patches as the glimpse observation; the coarse patch acts as a “foveal” feature.
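A sketch of the two-scale glimpse extraction; the zero-padding at borders, the 2x coverage of the coarse patch, and the naive strided down-sampling are assumptions for illustration:

```python
import numpy as np

def extract_glimpse(image, cy, cx, size):
    """Crop a size x size patch centred at (cy, cx), zero-padding at borders."""
    h, w = image.shape
    out = np.zeros((size, size), dtype=image.dtype)
    y0, x0 = cy - size // 2, cx - size // 2
    ys, xs = max(y0, 0), max(x0, 0)
    ye, xe = min(y0 + size, h), min(x0 + size, w)
    out[ys - y0:ye - y0, xs - x0:xe - x0] = image[ys:ye, xs:xe]
    return out

def two_scale_glimpse(image, cy, cx, size):
    """Fine patch plus a 'foveal' patch covering twice the extent,
    down-sampled back to the same resolution."""
    fine = extract_glimpse(image, cy, cx, size)
    coarse = extract_glimpse(image, cy, cx, 2 * size)[::2, ::2]  # naive 2x down-sample
    return np.stack([fine, coarse])

img = np.arange(100.0).reshape(10, 10)
g = two_scale_glimpse(img, 5, 5, 4)
```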

The hyper-parameters in our experiments are the learning rate and the location variance in equation 9. They are determined by grid search and cross-validation.

4.1 Learning to find digits

We first evaluate the effectiveness of the controller in the deep recurrent attention model using the MNIST handwritten digit dataset.

We generated a dataset of pairs of randomly picked handwritten digits placed in a 100x100 image with distraction noise in the background. The task is to identify the 55 different unordered combinations of the two digits as a classification problem. The attention models are allowed 4 glimpses before making a classification prediction. The goal of this experiment is to evaluate the ability of the controller and recurrent network to combine information from multiple glimpses with minimal effort from the glimpse network. The results are shown in Figure 2: the DRAM model with a context network significantly outperforms the other models.

Model                      Test Err.
RAM (Mnih et al., 2014)    9%
DRAM w/o context           7%
DRAM                       5%

Figure 2: Error rates on the MNIST pairs classification task.

Model                      Test Err.
ConvNet 64-64-64-512       3.2%
DRAM                       2.5%

Figure 3: Error rates on the MNIST two-digit addition task.

4.2 Learning to do addition

For a more challenging task, we designed another dataset with two MNIST digits placed on an empty 100x100 background, where the task is to predict the sum of the two digits in the image as a classification problem with 19 targets. The model has to find where each digit is and add them up. When the two digits are sampled uniformly from all classes, the label distribution of the sum is heavily imbalanced, with most of the probability mass concentrated around 10. Also, many different digit combinations map to the same target, for example [5,5] and [3,7].

The class label provides a weaker association between the visual feature and supervision signal in this task than in the digit combination task. We used the same model as in the combination task. The deep recurrent attention model is able to discover a glimpse policy to solve this task achieving a 2.5% error rate. In comparison, the ConvNets take longer to learn and perform worse when given weak supervision.

Figure 4: Left) Two examples of the learned policy on the digit pair classification task. The first column shows the input image while the next 5 columns show the selected glimpse locations. Right) Two examples of the learned policy on the digit addition task. The first column shows the input image while the next 5 columns show the selected glimpse locations.

Some inference samples are shown in Figure 4. Surprisingly, the learned glimpse policy in the addition task is very different from that in the combination task: the model that learned to do addition toggles its glimpses between the two digits.

4.3 Learning to read house numbers

Model                                     Test Err.
11 layer CNN (Goodfellow et al., 2013)    3.96%
10 layer CNN                              4.11%
Single DRAM                               5.1%
Single DRAM MC avg.                       4.4%
forward-backward DRAM MC avg.             3.9%

Figure 5: Whole sequence recognition error rates on multi-digit SVHN.

Model                                     Test Err.
10 layer CNN resized                      50%
10 layer CNN re-trained                   5.60%
Single DRAM focus                         5.7%
forward-backward DRAM focus               5.0%
Single DRAM fine-tuned                    5.1%
forward-backward DRAM fine-tuned          4.46%

Figure 6: Whole sequence recognition error rates on enlarged multi-digit SVHN.

The publicly available multi-digit street view house number (SVHN) dataset (Netzer et al., 2011) consists of images of digits taken from pictures of house fronts. Following Goodfellow et al. (2013), we formed a validation set of 5000 images by randomly sampling images from the training set and the extra set; these were used for selecting the learning rate and the sampling variance of the stochastic glimpse policy. The models are trained using the remaining 200,000 training images. We follow the preprocessing technique from Goodfellow et al. (2013) to generate tightly cropped 64x64 images with the multi-digit sequence at the center, and similar data augmentation is used to create 54x54 jittered images during training. We also convert the RGB images to grayscale, as we observed that color information does not affect the final classification performance.

We trained a model to classify all the digits in an image sequentially with the objective function defined in equation 15. The label sequence ordering is chosen to go from left to right as the natural ordering of the house number. The attention model is given 3 glimpses for each digit before making a prediction. The recurrent model keeps running until it predicts a terminal label or until the longest digit length in the dataset is reached. In the SVHN dataset, up to 5 digits can appear in an image. This means the recurrent model will run up to 18 glimpses per image, that is 5 x 3 plus 3 glimpses for a terminal label. Learning the attention model took around 3 days on a GPU.

The model performance is shown in Figure 5. We found that there is still a performance gap between the state-of-the-art deep ConvNet and a single DRAM that “reads” from left to right, even with Monte Carlo averaging. The DRAM often predicts additional digits in place of the terminal class. In addition, the distribution of the leading digit of real-life house numbers follows Benford's law.

We therefore train a second recurrent attention model, a backward model, that “reads” the house numbers from right to left. The forward and backward models share the same weights for their glimpse networks, but they have separate weights for their recurrent and emission networks. The predictions of the forward and backward models are combined to estimate the final sequence prediction. Following the observation that attention models often overestimate the sequence length, we keep the first l digits of the sequence prediction, where l is the shorter of the sequence lengths predicted by the forward and backward models. This simple heuristic works very well in practice, and we obtain state-of-the-art performance on the Street View house number dataset with the forward-backward recurrent attention model. Videos showing sample runs of the forward and backward models on SVHN test data can be found at http://www.psi.toronto.edu/~jimmy/dram/forward.avi and http://www.psi.toronto.edu/~jimmy/dram/backward.avi respectively. These visualizations show that the attention model learns to follow the slope of multi-digit house numbers when they go up or down.
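One possible reading of this length heuristic, written out as code; since the original description is terse, the tie-breaking details here are assumptions:

```python
def combine_forward_backward(fwd, bwd):
    """Combine a left-to-right and a right-to-left sequence prediction.
    Assumption: since attention models tend to over-predict sequence
    length, trust the model with the shorter prediction and take that many
    digits, reversing the backward model's output into reading order."""
    n = min(len(fwd), len(bwd))
    bwd_lr = bwd[::-1]  # backward model emits digits right-to-left
    return fwd[:n] if len(fwd) <= len(bwd) else bwd_lr[-n:]

# Forward model hallucinates a 4th digit; the backward model's shorter
# length wins and its reversed reading is kept.
combined = combine_forward_backward([1, 2, 3, 7], [3, 2, 1])
```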

For comparison, we also implemented a deep ConvNet with an architecture similar to the one used in Goodfellow et al. (2013). The network had 8 convolutional layers with 128 filters each, followed by 2 fully connected layers of 3096 ReLU units. Dropout with a 50% rate is applied to all 10 layers to prevent over-fitting.

(Giga) floating-point ops    10 layer CNN    DRAM    DRAM MC avg.    F-B DRAM MC avg.
54x54                        2.1             0.2     0.35            0.7
110x110                      8.5             0.2     1.1             2.2

Parameters (millions)        10 layer CNN    DRAM    DRAM MC avg.    F-B DRAM MC avg.
54x54                        51              14      14              28
110x110                      169             14      14              28

Table 1: Computational cost of DRAM vs. deep ConvNets.

Moreover, we generate a less tightly cropped 110x110 multi-digit SVHN dataset by enlarging the bounding box of each image such that the relative size of the digits stays the same as in the 54x54 images. Our deep attention model trained on 54x54 images can be directly applied to the new 110x110 dataset with no modification. The performance can be further improved by “focusing” the model on where the digits are: we run the model once, crop a 54x54 bounding box around the glimpse location sequence, and feed the crop to the attention model again to generate the final prediction. This allows DRAM to obtain similar prediction accuracy on the enlarged images as on the cropped images without ever being trained on large images. We also compared the deep ConvNet trained on the 110x110 images with the fine-tuned attention model. The deep attention model significantly outperforms the deep ConvNet with very little training time: the DRAM model takes only a few hours to fine-tune on the enlarged SVHN data, compared to one week of training for the deep 10 layer ConvNet.
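A sketch of the “focus” step; the paper crops around the glimpse location sequence, and centring the crop on the mean glimpse location (as done here) is a simplifying assumption:

```python
import numpy as np

def focus_crop(image, glimpse_xy, size=54):
    """Crop a size x size window centred on the mean of the glimpse
    locations from a first pass, clamped to the image, so a model trained
    on 54x54 inputs can 'focus' on a larger image."""
    h, w = image.shape[:2]
    cy = int(round(np.mean([p[1] for p in glimpse_xy])))
    cx = int(round(np.mean([p[0] for p in glimpse_xy])))
    y0 = min(max(cy - size // 2, 0), max(h - size, 0))
    x0 = min(max(cx - size // 2, 0), max(w - size, 0))
    return image[y0:y0 + size, x0:x0 + size]

# First-pass glimpses on a 110x110 image; the second pass sees only 54x54.
img = np.zeros((110, 110))
crop = focus_crop(img, [(40, 60), (60, 60), (50, 55)])
```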

5 Discussion

In our experiments, the proposed deep recurrent attention model (DRAM) outperforms the state-of-the-art deep ConvNets on the standard SVHN sequence recognition task. Moreover, as we increase the image area around the house numbers or lower the signal-to-noise ratio, the advantage of the attention model becomes more significant.

In Table 1, we compare the computational cost of our proposed deep recurrent attention model with that of deep ConvNets, in terms of the number of floating-point operations for the multi-digit SVHN models, along with the number of parameters in each model. A recurrent attention model that only processes a selected subset of the input scales better than a ConvNet that looks over the entire image. The estimated cost for the DRAM is calculated using the maximum sequence length in the dataset; the expected computational cost is much lower in practice, since most of the house numbers are only a few digits long. In addition, since the attention-based model does not process the whole image, it can naturally work on images of different sizes with the same computational cost, independent of the input dimensionality.

We also found that the attention-based model is less prone to over-fitting than ConvNets, likely because of the stochasticity in the glimpse policy during training. Though it is still beneficial to regularize the attention model with some dropout noise between the hidden layers during training, we found that this gives only a marginal performance boost of 0.1% on the multi-digit SVHN task. By contrast, the deep 10 layer ConvNet is only able to achieve a 5.5% error rate when dropout is applied to just the last two fully connected hidden layers.

Finally, we note that DRAM can easily deal with variable-length label sequences. Moreover, a model trained on a dataset with a fixed sequence length can easily be transferred and fine-tuned on a similar dataset with longer target sequences. This is especially useful when there is a lack of data for the task with longer sequences.

6 Conclusion

We described a novel computer vision model that uses an attention mechanism to decide where to focus its computation and showed how it can be trained end-to-end to sequentially classify multiple objects in an image. The model outperformed the state-of-the-art ConvNets on a multi-digit house number recognition task while using both fewer parameters and less computation than the best ConvNets, thereby showing that attention mechanisms can improve both the accuracy and efficiency of ConvNets on a real-world task. Since our proposed deep recurrent attention model is flexible, powerful, and efficient, we believe that it may be a promising approach for tackling other challenging computer vision tasks.

7 Acknowledgements

We would like to thank Geoffrey Hinton, Nando de Freitas and Chris Summerfield for many helpful comments and discussions. We would also like to thank the developers of DistBelief (Dean et al., 2012a).


8 Appendix

8.1 General Training Details

We used the ReLU activation function, f(z) = max(z, 0), in the hidden layers for all results reported here unless otherwise noted. We found that ReLU units significantly speed up training. We optimized the model parameters using stochastic gradient descent with the Nesterov momentum technique, with a mini-batch size of 128 to estimate the gradient direction. The momentum coefficient was fixed throughout training. A learning rate schedule was applied to improve convergence: the learning rate starts at its initial value in the first epoch and is exponentially reduced by a constant factor after each epoch.

8.2 Details of Learning to Find Digits

The unit width for the Cartesian coordinates was set to 20 pixels and the glimpse location sampling standard deviation was set to 0.03. There are 512 LSTM units and 256 hidden units in each fully connected layer of the model. We intentionally used a simple fully connected network with a single hidden layer of 256 units as G_image in the glimpse network.

8.3 Details of Learning to Read House Numbers

Unlike in the MNIST experiments, the number of digits in each image varies, and the digits exhibit more variation due to natural backgrounds, lighting changes, and highly variable resolution. We use a much larger deep recurrent attention model for this task; it was crucial to have a powerful glimpse network to obtain good performance. As described in section 3, the glimpse network consists of three convolutional layers, with 5x5 filter kernels in the first layer and 3x3 kernels in the later two. The number of filters in those layers was {64, 64, 128}. There are 512 LSTM units in each layer of the recurrent network, and the fully connected hidden layers in each module listed in section 3 all have 1024 ReLU hidden units. The Cartesian coordinate unit width was set to 12 pixels, and the glimpse location is sampled with a fixed variance of 0.03.

Model                   Test Err.
small DRAM              5.1%
small DRAM + dropout    4.6%

Table 2: Effectiveness of regularization.