Glance and Focus Networks for Dynamic Visual Recognition

01/09/2022
by   Gao Huang, et al.

Spatial redundancy widely exists in visual recognition tasks: the discriminative features in an image or video frame usually correspond to only a subset of pixels, while the remaining regions are irrelevant to the task at hand. Static models that process every pixel with an equal amount of computation therefore incur considerable redundancy in both time and memory. In this paper, we formulate image recognition as a sequential coarse-to-fine feature learning process, mimicking the human visual system. Specifically, the proposed Glance and Focus Network (GFNet) first extracts a quick global representation of the input image at a low resolution, and then strategically attends to a series of small, salient regions to learn finer features. This sequential process naturally facilitates adaptive inference at test time: it can be terminated once the model is sufficiently confident about its prediction, avoiding further redundant computation. Notably, locating discriminative regions is formulated as a reinforcement learning task, so no manual annotations beyond classification labels are required. GFNet is general and flexible, as it is compatible with any off-the-shelf backbone model (such as MobileNets, EfficientNets and TSM), which can be conveniently deployed as the feature extractor. Extensive experiments on a variety of image classification and video recognition tasks, with various backbone models, demonstrate the remarkable efficiency of our method. For example, it reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 1.3x without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/GFNet-Pytorch.
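
The adaptive glance-then-focus inference loop described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (see the GitHub repository for that): the class name GlanceFocusClassifier, the crop_patch helper, the fixed confidence threshold, and the random choice of focus region are all illustrative assumptions, and the learned region-selection policy trained with reinforcement learning is replaced here by a random pick.

```python
import torch
import torch.nn.functional as F
from torchvision.models import mobilenet_v3_small


class GlanceFocusClassifier(torch.nn.Module):
    """Sketch of glance-and-focus adaptive inference (illustrative, not the official GFNet)."""

    def __init__(self, num_classes=1000, patch_size=96, max_steps=4, threshold=0.8):
        super().__init__()
        # Any off-the-shelf backbone can serve as the feature extractor.
        self.backbone = mobilenet_v3_small(num_classes=num_classes)
        self.patch_size = patch_size
        self.max_steps = max_steps
        self.threshold = threshold  # assumed early-exit confidence threshold

    def crop_patch(self, image, center):
        """Crop a small square region around a (normalized) center location."""
        _, _, h, w = image.shape
        cy = int(center[0].item() * (h - self.patch_size))
        cx = int(center[1].item() * (w - self.patch_size))
        return image[:, :, cy:cy + self.patch_size, cx:cx + self.patch_size]

    @torch.no_grad()
    def forward(self, image):
        # Glance step: a quick prediction from a cheap, low-resolution view.
        glance = F.interpolate(image, size=self.patch_size, mode='bilinear',
                               align_corners=False)
        logits = self.backbone(glance)
        probs = F.softmax(logits, dim=1)

        # Focus steps: attend to small regions until the prediction is confident.
        for _ in range(self.max_steps - 1):
            if probs.max().item() >= self.threshold:
                break  # early exit: skip the remaining computation
            # The region is chosen at random here; GFNet learns this policy with RL.
            center = torch.rand(2)
            patch = self.crop_patch(image, center)
            logits = logits + self.backbone(patch)  # simple logit accumulation
            probs = F.softmax(logits, dim=1)
        return probs


if __name__ == "__main__":
    model = GlanceFocusClassifier().eval()
    prediction = model(torch.randn(1, 3, 224, 224))
    print(prediction.argmax(dim=1))
```

Because the loop terminates as soon as the confidence threshold is met, the average cost becomes input-dependent: easy images stop after the glance step, while harder ones trigger additional focus steps.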


