Rethinking Layer-wise Feature Amounts in Convolutional Neural Network Architectures

12/14/2018
by Martin Mundt, et al.

We characterize convolutional neural networks with respect to the relative amount of features per layer. Using a skew normal distribution as a parametrized framework, we investigate the common assumption of monotonically increasing feature counts in higher layers of architecture designs. Our evaluation of models with VGG-type layers on the MNIST, Fashion-MNIST and CIFAR-10 image classification benchmarks provides evidence that motivates rethinking this common assumption: architectures that favor larger early layers seem to yield better accuracy.
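A minimal sketch of what such a skew-normal parametrization of layer-wise feature counts might look like, under the assumption of a fixed total feature budget spread over the layers; the function name `layerwise_feature_counts` and the default `loc`/`scale` values are illustrative and not taken from the paper:

```python
# Hypothetical sketch: allocate a fixed total number of convolutional features
# across layers using a skew normal density, so a single skewness parameter
# controls whether early or late layers receive the larger share.
import numpy as np
from scipy.stats import skewnorm

def layerwise_feature_counts(num_layers, total_features, skew, loc=0.5, scale=0.35):
    """Return one feature count per layer, summing roughly to total_features.

    Layer positions are spread over [0, 1]; the skew normal density at each
    position (normalized to sum to 1) gives that layer's share of the budget.
    Negative `skew` shifts mass toward early layers, positive toward later ones.
    """
    positions = np.linspace(0.0, 1.0, num_layers)
    density = skewnorm.pdf(positions, a=skew, loc=loc, scale=scale)
    shares = density / density.sum()
    return np.maximum(1, np.round(shares * total_features).astype(int))

# Example: an 8-layer VGG-style stack with an illustrative 1472-feature budget.
print(layerwise_feature_counts(8, 1472, skew=-4))  # larger early layers
print(layerwise_feature_counts(8, 1472, skew=+4))  # the conventional pattern
```

Varying the skewness then generates a family of architectures, from designs with most features in early layers to the conventional pattern of growing feature counts toward the output.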


Related research

RecNets: Channel-wise Recurrent Convolutional Neural Networks (05/28/2019)
In this paper, we introduce Channel-wise recurrent convolutional neural ...

Convolutional Neural Networks with Layer Reuse (01/28/2019)
A convolutional layer in a Convolutional Neural Network (CNN) consists o...

Recursive Autoconvolution for Unsupervised Learning of Convolutional Neural Networks (06/02/2016)
In visual recognition tasks, such as image classification, unsupervised ...

Measuring Unintended Memorisation of Unique Private Features in Neural Networks (02/16/2022)
Neural networks pose a privacy risk to training data due to their propen...

Filter Distribution Templates in Convolutional Networks for Image Classification Tasks (04/28/2021)
Neural network designers have reached progressive accuracy by increasing...

Deeply-Supervised Nets (09/18/2014)
Our proposed deeply-supervised nets (DSN) method simultaneously minimize...

BlockQNN: Efficient Block-wise Neural Network Architecture Generation (08/16/2018)
Convolutional neural networks have gained a remarkable success in comput...
