Dense xUnit Networks

11/27/2018
by Idan Kligvasser, et al.

Deep net architectures have constantly evolved over the past few years, leading to significant advancements in a wide array of computer vision tasks. However, besides high accuracy, many applications also require a low computational load and a limited memory footprint. To date, efficiency has typically been achieved either by architectural choices at the macro level (e.g. using skip connections or pruning techniques) or by modifications at the level of the individual layers (e.g. using depth-wise convolutions or channel shuffle operations). Interestingly, much less attention has been devoted to the role of the activation functions in constructing efficient nets. Recently, Kligvasser et al. showed that incorporating spatial connections within the activation functions enables a significant boost in performance in image restoration tasks, at any given budget of parameters. However, the effectiveness of their xUnit module has only been tested on simple small models, which are not characteristic of those used in high-level vision tasks. In this paper, we adopt and improve the xUnit activation, show how it can be incorporated into the DenseNet architecture, and illustrate its high effectiveness for classification and image restoration tasks alike. While the DenseNet architecture is extremely efficient to begin with, our dense xUnit net (DxNet) can typically achieve the same performance with far fewer parameters. For example, on ImageNet, our DxNet outperforms a ReLU-based DenseNet having 30% more parameters. Furthermore, in denoising and super-resolution, DxNet significantly improves upon all existing lightweight solutions, including the xUnit-based nets of Kligvasser et al.
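To make the "spatial connections within the activation function" idea concrete, here is a minimal NumPy sketch of an xUnit-style gated activation: instead of a pointwise nonlinearity, each feature map is multiplied by a spatial gating map computed from its own ReLU'd neighborhood via a depthwise convolution. This is a simplified illustration, not the authors' exact module — the original xUnit also interleaves batch normalization, and the helper names below are our own.

```python
import numpy as np

def depthwise_conv2d(x, kernels, pad):
    """Naive depthwise 2-D convolution.
    x: (C, H, W) feature maps; kernels: (C, k, k), one filter per channel."""
    C, H, W = x.shape
    k = kernels.shape[1]
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernels[c])
    return out

def xunit(x, kernels):
    """xUnit-style spatial activation (simplified sketch):
    g = exp(-(dw_conv(relu(x)))^2), a per-pixel gate in (0, 1];
    output = x * g, so each activation is modulated by its neighborhood."""
    a = np.maximum(x, 0.0)                               # ReLU
    d = depthwise_conv2d(a, kernels, pad=kernels.shape[1] // 2)
    g = np.exp(-d ** 2)                                  # Gaussian gating map
    return x * g
```

Because the gate lies in (0, 1], the output is always an attenuated copy of the input; with all-zero kernels the gate is identically 1 and the layer reduces to the identity, which makes the extra spatial parameters easy to train on top of a plain linear layer.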


