Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree

09/30/2015
by Chen-Yu Lee, et al.

We seek to improve deep neural networks by generalizing the pooling operations that play a central role in current architectures. We pursue a careful exploration of approaches that allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in (1) learning a pooling function via (two strategies of) combining max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments, every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement and can be applied within various deep neural network architectures. These benefits come with only a light increase in computational overhead during training and a very modest increase in the number of model parameters.
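As a rough illustration of the first direction only (not the authors' implementation), the sketch below shows "mixed" pooling, where the output is a convex combination a * max + (1 - a) * avg over each window, and the "gated" variant, where the mixing weight is made responsive to the input via a sigmoid gate computed per pooled region. In the paper both the scalar weight and the gate parameters are learned jointly with the network; here they are fixed inputs for clarity.

```python
import numpy as np

def mixed_pool(x, a, size=2):
    """Mixed max-average pooling: a * max(window) + (1 - a) * avg(window).

    In the paper the mixing weight `a` is learned; here it is a fixed
    scalar for illustration. `x` is a 2-D feature map, `size` the
    (non-overlapping) pooling window.
    """
    h, w = x.shape
    out = np.empty((h // size, w // size))
    for i in range(h // size):
        for j in range(w // size):
            win = x[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i, j] = a * win.max() + (1 - a) * win.mean()
    return out

def gated_pool(x, gate_w, size=2):
    """Gated variant: the mixing weight is input-responsive,
    a = sigmoid(<gate_w, window>), computed separately for each region.
    `gate_w` stands in for the learned gating mask of the same shape
    as the pooling window.
    """
    h, w = x.shape
    out = np.empty((h // size, w // size))
    for i in range(h // size):
        for j in range(w // size):
            win = x[i * size:(i + 1) * size, j * size:(j + 1) * size]
            a = 1.0 / (1.0 + np.exp(-np.dot(gate_w.ravel(), win.ravel())))
            out[i, j] = a * win.max() + (1 - a) * win.mean()
    return out
```

Note that `a = 1` recovers plain max pooling and `a = 0` recovers average pooling, so the learned operation can only match or interpolate between the two conventional choices; with a zero gating mask, `gated_pool` reduces to `mixed_pool` with `a = 0.5`.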

