CondenseNet V2: Sparse Feature Reactivation for Deep Networks

04/09/2021
by Le Yang, et al.

Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency. The recently proposed CondenseNet has shown that this mechanism can be further improved if redundant features are removed. In this paper, we propose an alternative approach named sparse feature reactivation (SFR), which aims to actively increase the utility of features for reuse. In the proposed network, named CondenseNet V2, each layer can simultaneously learn to 1) selectively reuse a set of the most important features from preceding layers; and 2) actively update a set of preceding features to increase their utility for later layers. Our experiments show that the proposed models achieve promising performance on image classification (ImageNet and CIFAR) and object detection (MS COCO) in terms of both theoretical efficiency and practical speed.
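The two mechanisms described above suggest a simple per-layer structure: pick a sparse subset of preceding feature channels to compute new features, and write a sparse additive update back onto the preceding features before passing everything on. Below is a minimal PyTorch sketch of that idea. It is an illustration under stated assumptions, not the authors' implementation: the class name SFRLayer is hypothetical, and the fixed random reuse_mask/update_mask buffers merely stand in for the sparse connectivity patterns that the paper learns during training.

    # Minimal sketch of sparse feature reactivation (SFR); names and the
    # fixed random masks are illustrative assumptions, not the paper's code.
    import torch
    import torch.nn as nn

    class SFRLayer(nn.Module):
        """One densely connected layer with sparse feature reactivation.

        Given the concatenation of all preceding features, the layer
        1) reuses a sparse subset of input channels to compute new features, and
        2) emits a sparse update that is added back onto the preceding
           features, increasing their utility for later layers.
        """

        def __init__(self, in_channels: int, growth: int):
            super().__init__()
            # Binary masks over input channels. In the paper these patterns
            # are learned during training; here they are fixed random
            # binaries purely for illustration.
            self.register_buffer("reuse_mask", (torch.rand(in_channels) < 0.5).float())
            self.register_buffer("update_mask", (torch.rand(in_channels) < 0.25).float())
            self.new_feat = nn.Sequential(
                nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, growth, kernel_size=1, bias=False),
            )
            self.reactivate = nn.Sequential(
                nn.BatchNorm2d(growth), nn.ReLU(inplace=True),
                nn.Conv2d(growth, in_channels, kernel_size=1, bias=False),
            )

        def forward(self, prev: torch.Tensor) -> torch.Tensor:
            m = self.reuse_mask.view(1, -1, 1, 1)
            new = self.new_feat(prev * m)         # 1) sparse feature reuse
            upd = self.reactivate(new)            # 2) compute updates ...
            u = self.update_mask.view(1, -1, 1, 1)
            prev = prev + upd * u                 # ... sparsely reactivate old features
            return torch.cat([prev, new], dim=1)  # dense connectivity: pass everything on

    # Usage: channel count grows by `growth` at each layer, as in a DenseNet.
    x = torch.randn(2, 64, 32, 32)
    layer = SFRLayer(in_channels=64, growth=16)
    print(layer(x).shape)  # torch.Size([2, 80, 32, 32])

The key difference from a plain DenseNet layer in this sketch is the reactivation path: earlier features are no longer frozen once produced, so a feature that would otherwise become redundant can be updated rather than pruned.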


