Systematic evaluation of CNN advances on the ImageNet

06/07/2016
by Dmytro Mishkin, et al.

The paper systematically studies the impact of a range of recent advances in CNN architectures and learning methods on the object categorization (ILSVRC) problem. The evaluation tests the influence of the following architectural choices: non-linearity (ReLU, ELU, maxout, compatibility with batch normalization), pooling variants (stochastic, max, average, mixed), network width, classifier design (convolutional, fully-connected, SPP), and image pre-processing, as well as learning parameters: learning rate, batch size, cleanliness of the data, etc. The performance gains of the proposed modifications are first tested individually and then in combination. The sum of individual gains is bigger than the observed improvement when all modifications are introduced together, but the "deficit" is small, suggesting that their benefits are largely independent. We show that the use of 128x128 pixel images is sufficient to make qualitative conclusions about optimal network structure that hold for the full-size Caffe and VGG nets. The results are obtained an order of magnitude faster than with the standard 224x224 images.
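The ablation protocol the abstract describes — swapping one design choice (non-linearity, pooling variant) at a time and comparing the result — can be sketched in a few lines. This is an illustrative NumPy mock-up of one cell of such a grid, not the authors' code; the function names and the toy feature map are assumptions for demonstration.

```python
import numpy as np

# Two of the non-linearities compared in the paper.
def relu(x):
    return np.maximum(x, 0.0)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# Two of the pooling variants compared in the paper (2x2, stride 2).
def max_pool2x2(x):
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def avg_pool2x2(x):
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# One "cell" of the ablation grid: a toy feature map passed through
# each (activation, pooling) combination; in the paper each such
# combination would be trained and scored on ILSVRC.
fmap = np.array([[-1.0, 2.0],
                 [ 3.0, -4.0]])
for act in (relu, elu):
    for pool in (max_pool2x2, avg_pool2x2):
        print(act.__name__, pool.__name__, pool(act(fmap)).ravel())
```

The point of the grid structure is that each factor varies independently, which is exactly why the near-additivity of the measured gains is meaningful.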


