SSBNet: Improving Visual Recognition Efficiency by Adaptive Sampling

07/23/2022
by   Ho Man Kwan, et al.

Downsampling is widely adopted to achieve a good trade-off between accuracy and latency in visual recognition. Unfortunately, the commonly used pooling layers are not learned and thus cannot preserve important information. Adaptive sampling, another dimension-reduction method, weights and processes the regions that are relevant to the task, and is therefore better able to preserve useful information. However, the use of adaptive sampling has been limited to certain layers. In this paper, we show that using adaptive sampling in the building blocks of a deep neural network can improve its efficiency. In particular, we propose SSBNet, which is built by repeatedly inserting sampling layers into existing networks such as ResNet. Experimental results show that the proposed SSBNet achieves competitive image classification and object detection performance on the ImageNet and COCO datasets. For example, SSB-ResNet-RS-200 achieved 82.6% top-1 accuracy on ImageNet, higher than the baseline ResNet-RS-152 at similar complexity. Visualization shows the advantage of SSBNet in allowing different layers to focus on different positions, and ablation studies further validate the advantage of adaptive sampling over uniform methods.
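The core idea of adaptive sampling can be illustrated with a minimal NumPy sketch: instead of pooling every region with equal weight, each output position takes a convex combination of all input positions, with weights given by a softmax over per-output logits. This is an illustrative sketch only, not the authors' implementation; in SSBNet the logits would be produced by a learned layer, whereas here they are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_downsample(x, logits):
    """Soft adaptive sampling.

    x      : (n_in, C)  flattened spatial positions with C channels
    logits : (n_out, n_in) per-output scores (learned in SSBNet;
             random placeholders in this sketch)
    Returns a (n_out, C) tensor where each output position is a
    softmax-weighted convex combination of all input positions.
    """
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # rows sum to 1
    return w @ x

x = rng.standard_normal((16, 8))        # 16 spatial positions, 8 channels
logits = rng.standard_normal((4, 16))   # sample down to 4 positions
y = adaptive_downsample(x, logits)
assert y.shape == (4, 8)
```

Uniform average pooling is the special case where every row of the weight matrix spreads equal mass over a fixed local window; adaptive sampling instead lets each layer shift its sampling weights toward task-relevant positions, which is what the paper's visualizations highlight.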


