Knowing where to look plays an important role in people's ability to learn and solve new tasks quickly. While some cues in an image are naturally attractive and lead to bottom-up saliency (Harel et al., 2007; Walther & Koch, 2006), others require voluntary effort and are more task-dependent, leading to top-down saliency (Sprague & Ballard, 2004; Borji et al., 2011). When humans perform a specific task, a combined model of attentional selection and object recognition is usually at work: bottom-up feature extraction, coupled with a hierarchical representation of object classes and motor commands, governs subsequent eye movements in order to maximize information gain (Itti & Koch, 2001).
The human attention mechanism is complex and depends on many factors, ranging from the nature and complexity of the task, to external factors such as rewards or distractors, and internal factors such as curiosity. Triesch et al. (2003) concluded that what we see is highly dependent on what we need. Human visual attention can also be seen as hierarchical (Baylis & Driver, 1993): when performing a complex task involving several subgoals, humans attend selectively to parts of the visual scene and deploy gaze sequentially over a temporal sequence of frames before performing motor actions. More importantly, human attention reuses past understanding of concepts, relations, and world models.
Intuitively, the fact that people focus only on specific parts of an image before acting should lead both to robustness in the presence of noise and deliberate distractors, and to the ability to generalize knowledge across different tasks. For example, we can navigate through any building regardless of the color of the walls or the interior decor. Hence, we would like to investigate whether this mechanism also provides robustness and the ability to transfer knowledge for reinforcement learning (RL) agents.
Our goal is to explore how foveating around the regions where humans look impacts the reinforcement learning process, especially focusing on robustness and continual learning. Because of this goal, we build on top of the UNREAL agent (Jaderberg et al., 2016), which aims to construct a better representation for continual learning, by focusing not only on learning the optimal value function for the given task, but also on optimizing several pseudo-rewards or auxiliary tasks. We investigate the impact of overlaying the real image with a mask that is determined by a model of human attention. We use the spectral residual saliency method (Hou & Zhang, 2007) to foveate around salient regions and train the UNREAL agents on a maze navigation task from DeepMind Lab (Beattie et al., 2016). We use varying degrees of foveation, in order to evaluate the impact on the learning process. Our hypothesis was that more foveation should lead to more robustness to distractors and noise, but also to worse final task performance. We also empirically explore if knowing where to look facilitates continual learning and leads learnt policies to be robust to variations in the data distribution.
2 Algorithmic approach
We started our approach by investigating saliency maps generated from the state-of-the-art Saliency Attentive Model (SAM) (Cornia et al., 2016), which ranks highly on the MIT Saliency Benchmark (Bylinskii et al., 2015). Figure 1 shows a sample input image from a static maze navigation task overlaid with a heat map generated from SAM. SAM uses a convolutional LSTM to focus on specific parts of the image and iteratively refines the visual attention. Once a gray-scale saliency map is generated from SAM, we overlay it on the original image using a jet color map. More salient regions in the image are indicated by the hotness of the map (red), whereas relatively insignificant regions are indicated by its coolness (blue). While the saliency maps generated by SAM look very intuitive, using a SAM model pre-trained on the VGG dataset is computationally very expensive in terms of training speed. For faster training, we instead use a real-time saliency computation technique, the spectral residual method (Hou & Zhang, 2007). The key idea of this method is to approximate the average log spectrum of natural images and subtract it from the log spectrum of a specific image, yielding the spectral residual; the residual is then transformed back to the spatial domain, producing a map that localizes proto-objects. Proto-objects are pre-attentive structures with limited spatial and temporal coherence within visual stimuli, which generate the perception of an object when attended to.
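The spectral residual computation described above can be sketched in a few lines of numpy. This is a minimal, numpy-only sketch; the function name, the box-filter width, and the normalization are our choices for illustration, not taken from the original paper or its implementation:

```python
import numpy as np

def spectral_residual_saliency(image, avg_kernel=3):
    """Spectral residual saliency (Hou & Zhang, 2007), numpy-only sketch.

    image: 2-D grayscale array. Returns a saliency map normalized to [0, 1].
    """
    f = np.fft.fft2(image.astype(np.float64))
    log_amplitude = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)

    # Local average of the log spectrum (box filter over the frequency plane).
    pad = avg_kernel // 2
    padded = np.pad(log_amplitude, pad, mode="edge")
    avg = np.zeros_like(log_amplitude)
    for dy in range(avg_kernel):
        for dx in range(avg_kernel):
            avg += padded[dy:dy + log_amplitude.shape[0],
                          dx:dx + log_amplitude.shape[1]]
    avg /= avg_kernel ** 2

    # Spectral residual: what remains after removing the "average" spectrum.
    residual = log_amplitude - avg

    # Back to the spatial domain, keeping the original phase.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    return saliency
```

In practice the output is usually smoothed with a Gaussian filter before use; we omit that step here to keep the sketch dependency-free.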
We first explore whether foveating around the salient locations in the image helps the agent learn faster. It is natural for humans to look at an entire visual scene yet automatically focus on salient regions while suppressing less important ones. With this intuition, instead of explicitly providing the attention map alongside the original image, we blend the attention map with the original image, as follows:
I'(x) = α S(x) I(x) + (1 − α) I(x),

where S(x) ∈ [0, 1] is the normalized saliency map at pixel x, I denotes the original image, and α controls the amount of foveation, i.e., the amount of blending desired. This is also depicted in Figure 2 for a range of α values. For instance, α = 1 removes all distractors and focuses on the salient regions alone (Figure 1(a)), whereas α close to 0 means looking largely at the original image.
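The blending step above can be sketched as follows, assuming images and saliency maps are float arrays scaled to [0, 1] (the function name is ours, for illustration):

```python
import numpy as np

def foveate(image, saliency, alpha):
    """Blend an image with its saliency map.

    image:    H x W x 3 float array in [0, 1]
    saliency: H x W float array in [0, 1]
    alpha:    amount of foveation; 1 keeps only salient regions,
              0 returns the original image unchanged.
    """
    mask = saliency[..., None]  # broadcast over the color channels
    return alpha * mask * image + (1.0 - alpha) * image
```

With alpha = 1, non-salient pixels are blacked out entirely; intermediate values keep the salient regions at full intensity while dimming the rest.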
3 Experiments

We train the UNREAL agent on DeepMind Lab's (Beattie et al., 2016) static navigation maze task (nav_maze_static) with all auxiliary tasks enabled as our baseline. We keep the network architecture consistent with Jaderberg et al. (2016): a CNN-LSTM base agent trained on-policy with A3C (Mnih et al., 2016). The input to the agent at each timestep is an 84 × 84 RGB image. The network consists of two convolutional layers with 16 and 32 filters respectively, followed by a fully connected layer with 256 units; a ReLU activation is used for all three layers. An LSTM receives the concatenation of the fully connected layer's output, the previous action taken, and the previous reward. The three auxiliary tasks are the pixel-control task, value-function replay, and the reward prediction task, as described in Jaderberg et al. (2016). We use 20-timestep rollouts for the base process, and the auxiliary tasks are performed corresponding to every update of the base A3C agent. We used the open-source implementation of UNREAL (https://github.com/miyosuda/unreal) as our baseline.
Next, we introduce the Visually-Attentive UNREAL agent (source code available at https://github.com/kkhetarpal/unrealwithattention), which foveates around the salient regions in each image. This is done in the base process of online A3C, as shown in the pseudo-code in Algorithm 1.
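The full procedure is given in Algorithm 1; the following is only a minimal sketch of where the foveation enters the base-process rollout. The `env` and `policy` objects, their methods, and the rollout length are illustrative stand-ins, not the actual UNREAL implementation:

```python
import numpy as np

def preprocess(frame, saliency_fn, alpha):
    """Foveate an observation before it enters the base A3C process.

    saliency_fn: any model returning an H x W saliency map in [0, 1]
    (we use the spectral residual method, but any model plugs in here).
    """
    s = saliency_fn(frame)[..., None]            # H x W x 1
    return alpha * s * frame + (1.0 - alpha) * frame

def run_rollout(env, policy, saliency_fn, alpha, n_steps=20):
    """One base-process rollout with foveated observations (sketch)."""
    transitions = []
    obs = env.reset()
    for _ in range(n_steps):
        obs = preprocess(obs, saliency_fn, alpha)  # the only change vs. UNREAL
        action = policy(obs)
        next_obs, reward, done = env.step(action)
        transitions.append((obs, action, reward))
        obs = env.reset() if done else next_obs
    return transitions
```

The only difference from the baseline agent is the `preprocess` call: every frame is blended with its saliency map before the policy, the LSTM state, and the auxiliary tasks ever see it.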
The training used multiple parallel threads for all our experiments. For our preliminary experiments, we explored different values of α. On one hand, foveating on the salient regions alone (α = 1) removes a lot of context around the important aspects of an image and results in little to no learning, as seen in Figure 3; this is also intuitive from the visualization in Figure 1(a). On the other hand, intermediate values of α show a boost in performance in the preliminary learning curves, as shown in Figure 3.
Table 1: Scores are averaged over 25 games, with the standard deviation across these games in brackets.
Based on our preliminary results, we further trained the visually-attentive UNREAL agent using only the intermediate value of α that showed a boost in performance in Figure 3. We ran multiple runs of both the baseline and the visually-attentive agent; Figure 4 shows the learning curves averaged across runs. The visually-attentive UNREAL agent learns marginally slower than the baseline on average, and the amount of foveation determines the impact on learning. However, the learning curves only show how these agents perform in the same environment over time. Next, we explore how visually-attentive agents compare to the baseline in transfer of learning. In other words, does visual attention facilitate continual learning?
To evaluate the trained models for continual learning, we introduce three types of perturbations in the input frames and record the average performance over k = 25 games; Table 1 reports these results for both agents. The perturbations comprise addition of Gaussian noise, tinting the images at random with the same hue, and tinting the images at random with different hues, categorized as three levels of difficulty: easy, medium, and hard. To tint the frames, we generate a flickering effect across the sequence of frames by scaling RGB values and adjusting colors in the HSV color space. From the mean scores in Table 1, one can note that both the baseline and the visually-attentive UNREAL agent remain unaffected by relatively small amounts of Gaussian noise. Upon encountering random flickering, the visually-attentive UNREAL agent still performs as well as the baseline and is relatively more robust to distractors in both the easy and medium categories. However, both agents struggle to transfer when the amount of distraction is larger than what they saw during training. For a qualitative analysis, we present visualizations of both agents in all three test scenarios as additional results in the supplementary material (https://sites.google.com/view/attendbeforeyouact).
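The three perturbations can be sketched as simple frame transforms. The noise scale, tint factors, and flicker probability below are illustrative placeholders, not the exact parameters used in our evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(frame, sigma=0.02):
    """Easy: small additive Gaussian noise on a [0, 1] float frame."""
    return np.clip(frame + rng.normal(0.0, sigma, frame.shape), 0.0, 1.0)

def tint_same_hue(frame, scale=(1.0, 0.8, 0.8), p=0.5):
    """Medium: randomly tint frames, always with the same fixed hue,
    by scaling the RGB channels (producing a flickering effect)."""
    if rng.random() < p:
        return np.clip(frame * np.asarray(scale), 0.0, 1.0)
    return frame

def tint_random_hue(frame, p=0.5):
    """Hard: randomly tint frames, each time with a random hue."""
    if rng.random() < p:
        scale = rng.uniform(0.5, 1.0, size=3)
        return np.clip(frame * scale, 0.0, 1.0)
    return frame
```

Applying one of these functions to every observation at evaluation time yields the easy, medium, and hard test conditions, with no change to the trained agents themselves.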
4 Discussion and Future Work
We present an exploratory study to understand the role of visual attention in learning to perform a task, and evaluate its effect on continual learning. Our key hypothesis is that knowing where to look in an image helps in learning a task, because this knowledge can be transferred to new tasks. We train the visually-attentive UNREAL agent, which foveates around regions of an image salient to the human eye. The performance evaluation under perturbations of the training setting demonstrates promising results for further analysis of continual learning with visual attention.
In this work, we employed a fundamental spectral residual saliency method, which is based on the log-spectrum representation of images. However, this technique does not take motion features into account, which could limit the performance of the visually-attentive agent. This was further confirmed by qualitative analysis of the attention maps generated by the spectral residual saliency method, as shown in Figure 5. It is interesting to note that this model focuses far more on the score region of the frame than on the objects in the maze. Another potential reason for limited performance is that the computed attention maps focus on the single most important object in the frame, as opposed to all salient regions. We note that our approach can be used as a wrapper around any saliency model, so better approaches would be easy to try.
A possible future direction in understanding the role of attention could involve training saliency models explicitly on images encountered in game playing. Even using a pre-trained SAM model in an optimized fashion could improve performance. One could employ a better saliency model to help the agent foveate on regions that capture the dynamics of the rewards and the feature representation. More importantly, it would be interesting to study a setting where the agent actively learns to control where to attend, rather than using a static attention model. The work of Jayaraman & Grauman (2016) on learning object representations in a dynamic, interactive setting follows a similar line of thought. Thus, an open question remains: how can we ensure that an agent controls its visits to the most visually attended states?
- Baylis & Driver (1993) Baylis, Gordon C and Driver, Jon. Visual attention and objects: evidence for hierarchical coding of location. Journal of Experimental Psychology: Human Perception and Performance, 19(3):451, 1993.
- Beattie et al. (2016) Beattie, Charles, Leibo, Joel Z, Teplyashin, Denis, Ward, Tom, Wainwright, Marcus, Küttler, Heinrich, Lefrancq, Andrew, Green, Simon, Valdés, Víctor, Sadik, Amir, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.
- Borji et al. (2011) Borji, Ali, Ahmadabadi, Majid N, and Araabi, Babak N. Cost-sensitive learning of top-down modulation for attentional control. Machine Vision and Applications, 22(1):61–76, 2011.
- Bylinskii et al. (2015) Bylinskii, Zoya, Judd, Tilke, Borji, Ali, Itti, Laurent, Durand, Frédo, Oliva, Aude, and Torralba, Antonio. MIT Saliency Benchmark, 2015.
- Cornia et al. (2016) Cornia, Marcella, Baraldi, Lorenzo, Serra, Giuseppe, and Cucchiara, Rita. Predicting human eye fixations via an LSTM-based saliency attentive model. arXiv preprint arXiv:1611.09571, 2016.
- Harel et al. (2007) Harel, Jonathan, Koch, Christof, and Perona, Pietro. Graph-based visual saliency. In Advances in neural information processing systems, pp. 545–552, 2007.
- Hou & Zhang (2007) Hou, Xiaodi and Zhang, Liqing. Saliency detection: A spectral residual approach. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pp. 1–8. IEEE, 2007.
- Itti & Koch (2001) Itti, Laurent and Koch, Christof. Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194, 2001.
- Jaderberg et al. (2016) Jaderberg, Max, Mnih, Volodymyr, Czarnecki, Wojciech Marian, Schaul, Tom, Leibo, Joel Z, Silver, David, and Kavukcuoglu, Koray. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.
- Jayaraman & Grauman (2016) Jayaraman, Dinesh and Grauman, Kristen. Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion. In European Conference on Computer Vision, pp. 489–505. Springer, 2016.
- Mnih et al. (2016) Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
- Sprague & Ballard (2004) Sprague, Nathan and Ballard, Dana. Eye movements for reward maximization. In Advances in neural information processing systems, pp. 1467–1474, 2004.
- Triesch et al. (2003) Triesch, Jochen, Ballard, Dana H, Hayhoe, Mary M, and Sullivan, Brian T. What you see is what you need. Journal of vision, 3(1):9–9, 2003.
- Walther & Koch (2006) Walther, Dirk and Koch, Christof. Modeling attention to salient proto-objects. Neural networks, 19(9):1395–1407, 2006.