TranSalNet: Towards perceptually relevant visual saliency prediction

10/07/2021
by Jianxun Lou, et al.

Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is therefore critical to integrate properties of human vision into the design of CNN architectures to achieve perceptually more relevant saliency prediction. Due to their inherent inductive biases, CNN architectures lack sufficient long-range contextual encoding capacity, which hinders CNN-based saliency models from capturing properties that emulate human viewing behaviour. Transformers have shown great potential in encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that transformers provide added value to saliency prediction, enhancing the perceptual relevance of its performance. Our proposed saliency model, TranSalNet, achieves superior results on public benchmarks and competitions for saliency prediction models. The source code of TranSalNet is available at: https://github.com/LJOVO/TranSalNet
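The long-range encoding the abstract attributes to transformers comes from scaled dot-product self-attention, in which every position attends to every other position regardless of distance. The sketch below is illustrative only, not TranSalNet's implementation; identity query/key/value projections and plain Python lists are assumptions made to keep it minimal.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    # X: n feature vectors of dimension d (an n x d list of lists).
    # Identity Q/K/V projections are assumed for brevity; a real
    # transformer layer would use learned linear projections.
    n, d = len(X), len(X[0])
    scale = math.sqrt(d)
    out = []
    for i in range(n):
        # Similarity of position i to every position j (global context).
        scores = [sum(X[i][k] * X[j][k] for k in range(d)) / scale
                  for j in range(n)]
        w = softmax(scores)
        # Output is a convex combination of all positions' features.
        out.append([sum(w[j] * X[j][k] for j in range(n)) for k in range(d)])
    return out
```

Because each output vector mixes information from all positions with softmax weights, a CNN feature map processed this way can relate distant image regions in a single step, whereas stacked convolutions need many layers to reach the same receptive field.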

Related research

03/22/2021
Transformers Solve the Limited Receptive Field for Monocular Depth Prediction
While convolutional neural networks have shown a tremendous impact on va...

08/14/2023
Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation
Vision transformers are effective deep learning models for vision tasks,...

05/08/2022
SparseTT: Visual Tracking with Sparse Transformers
Transformers have been successfully applied to the visual tracking task ...

12/07/2021
SalFBNet: Learning Pseudo-Saliency Distribution via Feedback Convolutional Networks
Feed-forward only convolutional neural networks (CNNs) may ignore intrin...

11/23/2022
GhostNetV2: Enhance Cheap Operation with Long-Range Attention
Light-weight convolutional neural networks (CNNs) are specially designed...

08/25/2020
FastSal: a Computationally Efficient Network for Visual Saliency Prediction
This paper focuses on the problem of visual saliency prediction, predict...

08/19/2021
Causal Attention for Unbiased Visual Recognition
Attention module does not always help deep models learn causal features ...
