Stochastic Pooling for Regularization of Deep Convolutional Neural Networks

01/16/2013
by Matthew D. Zeiler, et al.

We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
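
To make the procedure concrete, below is a minimal NumPy sketch of stochastic pooling on a single 2-D feature map. The function name, signature, and single-channel restriction are illustrative choices, not the paper's reference code. The training branch samples one activation per region from the multinomial formed by the (non-negative) activations, as the abstract describes; the test-time branch uses the probability-weighted average, which the paper proposes as the deterministic estimate.

```python
import numpy as np

def stochastic_pool(activations, pool_size=2, rng=None, train=True):
    """Stochastic pooling over non-overlapping pool_size x pool_size regions.

    activations: 2-D array of non-negative values (e.g. post-ReLU).
    train=True:  sample one activation per region, with probability
                 proportional to its magnitude (multinomial sampling).
    train=False: return the probability-weighted average of the region,
                 a deterministic test-time estimate.
    """
    rng = rng or np.random.default_rng()
    h, w = activations.shape
    out = np.zeros((h // pool_size, w // pool_size), dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            region = activations[i * pool_size:(i + 1) * pool_size,
                                 j * pool_size:(j + 1) * pool_size].ravel()
            total = region.sum()
            if total == 0:
                continue                    # all-zero region pools to zero
            p = region / total              # multinomial probabilities
            if train:
                out[i, j] = rng.choice(region, p=p)  # sample an activation
            else:
                out[i, j] = (p * region).sum()       # expected activation
    return out

rng = np.random.default_rng(0)
fmap = np.maximum(rng.normal(size=(4, 4)), 0.0)   # toy post-ReLU feature map
print(stochastic_pool(fmap, pool_size=2, rng=rng))       # stochastic (train)
print(stochastic_pool(fmap, pool_size=2, train=False))   # deterministic (test)
```

Because the sampled output is always one of the region's actual activations, the layer behaves like max pooling on average for peaked regions while injecting noise elsewhere, which is the source of the regularization effect.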

Related research

12/01/2015 · Towards Dropout Training for Convolutional Neural Networks
Recently, dropout has seen increasing use in deep learning. For deep con...

01/24/2020 · Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods
Convolutional neural networks have been achieving the best possible accu...

12/04/2015 · Max-Pooling Dropout for Regularization of Convolutional Neural Networks
Recently, dropout has seen increasing use in deep learning. For deep con...

03/04/2019 · Data Augmentation for Drum Transcription with Convolutional Neural Networks
A recurrent issue in deep learning is the scarcity of data, in particula...

02/22/2020 · Stochasticity in Neural ODEs: An Empirical Study
Stochastic regularization of neural networks (e.g. dropout) is a wide-sp...

07/02/2020 · Learning ordered pooling weights in image classification
Spatial pooling is an important step in computer vision systems like Con...

11/16/2016 · S3Pool: Pooling with Stochastic Spatial Sampling
Feature pooling layers (e.g., max pooling) in convolutional neural netwo...
