A^2-Nets: Double Attention Networks

10/27/2018
by Yunpeng Chen, et al.

Learning to capture long-range relations is fundamental to image/video recognition. Existing CNN models generally rely on increasing depth to model such relations, which is highly inefficient. In this work, we propose the "double attention block", a novel component that aggregates and propagates informative global features from the entire spatio-temporal space of input images/videos, enabling subsequent convolution layers to access features from the entire space efficiently. The component is designed with a double attention mechanism in two steps: the first step gathers features from the entire space into a compact set through second-order attention pooling, and the second step adaptively selects and distributes features to each location via a second attention. The proposed double attention block is easy to adopt and can be plugged into existing deep neural networks conveniently. We conduct extensive ablation studies and experiments on both image and video recognition tasks to evaluate its performance. On the image recognition task, a ResNet-50 equipped with our double attention blocks outperforms a much larger ResNet-152 architecture on the ImageNet-1k dataset, with over 40% fewer parameters and fewer FLOPs. On the action recognition task, our proposed model achieves state-of-the-art results on the Kinetics and UCF-101 datasets with significantly higher efficiency than recent works.
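To make the two-step mechanism concrete, below is a minimal PyTorch sketch of a double attention block for 2D feature maps. The `DoubleAttention` class name, the `c_m`/`c_n` bottleneck sizes, and the specific use of 1x1 convolutions with a residual connection are illustrative assumptions for this sketch rather than the paper's exact configuration: step one pools the input into a compact set of global descriptors via second-order attention pooling, and step two redistributes those descriptors to every location through a second softmax attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleAttention(nn.Module):
    """Illustrative sketch of a double attention (A^2) block for 2D inputs.

    c_m / c_n (the sizes of the feature space and of the set of global
    descriptors) are hypothetical hyper-parameters chosen for this sketch.
    """
    def __init__(self, in_channels, c_m, c_n):
        super().__init__()
        self.conv_a = nn.Conv2d(in_channels, c_m, kernel_size=1)  # feature maps A
        self.conv_b = nn.Conv2d(in_channels, c_n, kernel_size=1)  # gather attention B
        self.conv_v = nn.Conv2d(in_channels, c_n, kernel_size=1)  # distribute attention V
        self.conv_out = nn.Conv2d(c_m, in_channels, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        A = self.conv_a(x).view(b, -1, h * w)                     # (b, c_m, hw)
        # Each of the c_n gather maps is a distribution over all hw locations.
        B = F.softmax(self.conv_b(x).view(b, -1, h * w), dim=-1)  # (b, c_n, hw)
        # Each location holds a distribution over the c_n global descriptors.
        V = F.softmax(self.conv_v(x).view(b, -1, h * w), dim=1)   # (b, c_n, hw)

        # Step 1: second-order attention pooling -> compact global descriptors.
        G = torch.bmm(A, B.transpose(1, 2))                       # (b, c_m, c_n)
        # Step 2: adaptively distribute the descriptors back to each location.
        Z = torch.bmm(G, V).view(b, -1, h, w)                     # (b, c_m, h, w)
        return x + self.conv_out(Z)                               # residual connection
```

As a shape check under these assumptions, `DoubleAttention(64, c_m=32, c_n=16)` maps a `(2, 64, 14, 14)` input back to the same shape, so the block can be dropped between existing convolution stages of a network such as ResNet-50.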


