Understanding and Visualizing Deep Visual Saliency Models

03/06/2019
by   Sen He, et al.

Recently, data-driven deep saliency models have achieved high performance and have outperformed classical saliency models, as demonstrated by results on datasets such as MIT300 and SALICON. Yet, a large gap remains between the performance of these models and the inter-human baseline. Outstanding questions include what these models have learned, how and where they fail, and how they can be improved. This article attempts to answer these questions by analyzing the representations learned by individual neurons in the intermediate layers of deep saliency models. To this end, we follow the approach of existing deep saliency models, that is, borrowing a model pre-trained for object recognition to encode visual features and learning a decoder to infer saliency. We consider two cases, when the encoder is used as a fixed feature extractor and when it is fine-tuned, and we compare the inner representations of the network. To study how the learned representations depend on the task, we fine-tune the same network on the same image set for two different tasks: saliency prediction versus scene classification. Our analyses reveal that: 1) some visual regions (e.g. head, text, symbol, vehicle) are already encoded within various layers of the network pre-trained for object recognition; 2) on modern datasets, fine-tuning pre-trained models for saliency prediction makes them favor some categories (e.g. head) over others (e.g. text); 3) although deep saliency models outperform classical models on natural images, the converse is true for synthetic stimuli (e.g. pop-out search arrays), evidence of a significant difference between human and data-driven saliency models; and 4) we confirm that, after fine-tuning, the change in inner representations is mostly due to the task and not to the domain shift in the data.
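The encoder-decoder setup described in the abstract, with the encoder either frozen (fixed feature extractor) or fine-tuned, can be sketched in PyTorch. This is a minimal illustrative sketch, not the paper's architecture: the tiny stand-in encoder, the `SaliencyDecoder` readout, and all layer sizes are assumptions; real deep saliency models borrow, e.g., a VGG or ResNet pre-trained on ImageNet as the encoder.

```python
import torch
import torch.nn as nn

class SaliencyDecoder(nn.Module):
    """Hypothetical minimal decoder: maps encoder features to a saliency map."""
    def __init__(self, in_channels=512):
        super().__init__()
        # small convolutional readout producing a single-channel map
        self.readout = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )
        # upsample back to input resolution (encoder downsamples by 32x here)
        self.upsample = nn.Upsample(scale_factor=32, mode="bilinear",
                                    align_corners=False)

    def forward(self, feats):
        return self.upsample(self.readout(feats))

# Stand-in encoder (assumption): one strided conv mimicking a 32x-downsampling
# backbone; a real model would use a pre-trained object-recognition network.
encoder = nn.Sequential(
    nn.Conv2d(3, 512, kernel_size=3, stride=32, padding=1),
    nn.ReLU(),
)

# Case 1: fixed feature extractor -- freeze all encoder parameters,
# so only the decoder is trained for saliency prediction.
for p in encoder.parameters():
    p.requires_grad = False
# (Case 2, fine-tuning, would simply leave requires_grad=True.)

decoder = SaliencyDecoder(in_channels=512)

x = torch.randn(1, 3, 224, 224)   # dummy input image batch
feats = encoder(x)                 # (1, 512, 7, 7) feature map
sal = decoder(feats)               # (1, 1, 224, 224) saliency map
print(sal.shape)
```

In the frozen case only `decoder.parameters()` would be passed to the optimizer; in the fine-tuned case both encoder and decoder parameters are updated, which is the setting the paper compares against the fixed-extractor one.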


Related research

04/24/2020: How fine can fine-tuning be? Learning efficient language models
State-of-the-art performance on language understanding tasks is now achi...

10/05/2016: DeepGaze II: Reading fixations from deep features trained on object recognition
Here we present DeepGaze II, a model that predicts where people look in ...

01/12/2018: Deep saliency: What is learnt by a deep network about saliency?
Deep convolutional neural networks have achieved impressive performance ...

03/29/2020: Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models
Pre-trained sequence-to-sequence (seq-to-seq) models have significantly ...

11/29/2017: Saliency Weighted Convolutional Features for Instance Search
This work explores attention models to weight the contribution of local ...

02/18/2019: Contextual Encoder-Decoder Network for Visual Saliency Prediction
Predicting salient regions in natural images requires the detection of o...

03/07/2019: Ultrasound Image Representation Learning by Modeling Sonographer Visual Attention
Image representations are commonly learned from class labels, which are ...
