Recurrent Convolution for Compact and Cost-Adjustable Neural Networks: An Empirical Study

02/26/2019
by Zhendong Zhang, et al.

Recurrent convolution (RC), originally proposed to model spatio-temporal signals, shares the same convolutional kernels and unrolls them for multiple steps. We argue that RC can be viewed as a model compression strategy for deep convolutional neural networks, since it reduces redundancy across layers. However, the performance of an RC network is unsatisfactory if the same kernels are simply unrolled for multiple steps. We propose a simple yet effective variant that improves RC networks: the batch normalization (BN) layers of an RC module are learned independently (not shared) across unrolling steps. Moreover, we verify that RC networks can perform cost-adjustable inference by varying the number of unrolling steps. For cost-adjustable RC networks, we learn doubly independent BN layers, i.e., independent with respect to both the unrolling step of the current cell and that of the upstream cell. We provide insights on why the proposed method works. Experiments on both image classification and image denoising demonstrate the effectiveness of our method.
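To make the idea concrete, below is a minimal PyTorch sketch of an RC block with one convolution whose weights are shared across unrolling steps and an independent BN layer per step, following the description in the abstract. The class name, hyperparameters, and layer arrangement are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """Sketch of a recurrent convolution (RC) block: a single shared 3x3
    convolution unrolled for several steps, with an independent BatchNorm
    layer per unrolling step (the per-step-BN variant described above)."""

    def __init__(self, channels, max_steps=4):
        super().__init__()
        # One convolution whose weights are reused at every unrolling step.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, bias=False)
        # Independent BN parameters and statistics for each step.
        self.bns = nn.ModuleList(nn.BatchNorm2d(channels)
                                 for _ in range(max_steps))
        self.relu = nn.ReLU(inplace=True)
        self.max_steps = max_steps

    def forward(self, x, steps=None):
        # Varying `steps` at test time trades accuracy for compute,
        # giving cost-adjustable inference.
        steps = self.max_steps if steps is None else min(steps, self.max_steps)
        for t in range(steps):
            x = self.relu(self.bns[t](self.conv(x)))
        return x


# Usage: the same block run with fewer or more unrolling steps.
block = RecurrentConvBlock(channels=64, max_steps=4)
feat = torch.randn(1, 64, 32, 32)
out_cheap = block(feat, steps=2)  # lower cost
out_full = block(feat, steps=4)   # full unrolling
```

The "doubly independent" BN for cost-adjustable networks would additionally index the BN layers by the unrolling step chosen for the upstream cell; that extension is omitted here for brevity.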


