DCANet: Learning Connected Attentions for Convolutional Neural Networks

07/09/2020
by Xu Ma, et al.

While the self-attention mechanism has shown promising results for many vision tasks, it considers only the features at the current layer. We show that this design cannot take full advantage of the attention mechanism. In this paper, we present the Deep Connected Attention Network (DCANet), a novel design that boosts attention modules in a CNN model without modifying their internal structure. To achieve this, we interconnect adjacent attention blocks, allowing information to flow among them. With DCANet, all attention blocks in a CNN model are trained jointly, which improves attention learning. DCANet is generic: it is not limited to a specific attention module or base network architecture. Experimental results on the ImageNet and MS COCO benchmarks show that DCANet consistently outperforms state-of-the-art attention modules with minimal additional computational overhead in all test cases. All code and models are publicly available.
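To make the connection idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes SE-style channel attention blocks of equal channel width, and the learnable alpha/beta mixing that feeds the previous block's attention descriptor into the current block's extraction step is an illustrative assumption about how such a connection could look.

```python
# Minimal sketch of connecting adjacent channel-attention blocks, in the
# spirit of DCANet. The fusion rule (mixing the previous block's attention
# descriptor into the current block's pooled features) is an assumption for
# illustration, not the paper's exact formulation.
import torch
import torch.nn as nn

class ConnectedSEBlock(nn.Module):
    """SE-style channel attention that also accepts the previous block's
    attention descriptor, so information flows between attention blocks."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Learnable weights mixing previous and current attention features
        # (hypothetical parameterization; assumes equal channel widths).
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x, prev_attn=None):
        b, c, _, _ = x.shape
        feat = self.pool(x).view(b, c)  # current extraction (squeeze step)
        if prev_attn is not None:
            # Connection: fuse the previous attention descriptor in.
            feat = self.beta * feat + self.alpha * prev_attn
        attn = self.fc(feat)            # channel attention weights in [0, 1]
        out = x * attn.view(b, c, 1, 1)
        return out, attn                # pass attn on to the next block

# Usage: chain blocks so attention information flows forward.
blocks = nn.ModuleList([ConnectedSEBlock(64), ConnectedSEBlock(64)])
x = torch.randn(2, 64, 32, 32)
attn = None
for blk in blocks:
    x, attn = blk(x, attn)
```

Because the connection only threads an extra tensor between blocks, the internal structure of each attention module is untouched, which is what lets the scheme wrap existing attention designs with little extra cost.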


Related research

09/16/2022 · ConvFormer: Closing the Gap Between CNN and Vision Transformers
Vision transformers have shown excellent performance in computer vision ...

08/23/2019 · Assessing Knee OA Severity with CNN attention-based end-to-end architectures
This work proposes a novel end-to-end convolutional neural network (CNN)...

01/16/2019 · UAN: Unified Attention Network for Convolutional Neural Networks
We propose a new architecture that learns to attend to different Convolu...

05/31/2022 · An Effective Fusion Method to Enhance the Robustness of CNN
With the development of technology rapidly, applications of convolutiona...

02/17/2023 · Improving Transformer-based Networks With Locality For Automatic Speaker Verification
Recently, Transformer-based architectures have been explored for speaker...

02/13/2023 · Enhancing Multivariate Time Series Classifiers through Self-Attention and Relative Positioning Infusion
Time Series Classification (TSC) is an important and challenging task fo...

11/10/2019 · Two-Headed Monster And Crossed Co-Attention Networks
This paper presents some preliminary investigations of a new co-attentio...
