A Deeper Look at Power Normalizations

06/24/2018
by Piotr Koniusz, et al.

Power Normalizations (PN) are useful non-linear operators in the context of Bag-of-Words data representations, as they tackle problems such as feature imbalance. In this paper, we reconsider these operators in the deep learning setting by introducing a novel layer that implements PN for non-linear pooling of feature maps. Specifically, using a kernel formulation, our layer combines the feature vectors and their respective spatial locations in the feature maps produced by the last convolutional layer of a CNN. Linearizing such a kernel results in a positive definite matrix that captures the second-order statistics of the feature vectors, to which the PN operators are applied. We study two types of PN functions, namely (i) MaxExp and (ii) Gamma, and address their role and meaning in the context of non-linear pooling. We also provide a probabilistic interpretation of these operators and derive surrogates with well-behaved gradients for end-to-end CNN learning. We put our theory into practice by implementing the PN layer on a ResNet-50 model and present experiments on four benchmarks for fine-grained recognition, scene recognition, and material classification. Our results demonstrate state-of-the-art performance across all these tasks.
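To make the pipeline concrete, the sketch below shows second-order pooling followed by the two PN functions in their standard element-wise forms: Gamma normalization raises each entry to a power gamma, and MaxExp applies 1 - (1 - p)^eta, which can be read as the probability that a co-occurrence fires at least once in eta trials. This is a minimal NumPy illustration, not the paper's CNN layer; the function names and default values of gamma and eta are illustrative assumptions, and the paper additionally considers spectral variants and gradient surrogates not shown here.

```python
import numpy as np

def second_order_pool(X):
    # X: (d, N) matrix whose columns are feature vectors, e.g. from the
    # last convolutional layer. Returns the (d, d) autocorrelation matrix
    # capturing second-order statistics of the features.
    return (X @ X.T) / X.shape[1]

def gamma_pn(M, gamma=0.5):
    # Gamma power normalization: element-wise signed power |m|^gamma,
    # which compresses large co-occurrence values (gamma is a hyperparameter).
    return np.sign(M) * np.abs(M) ** gamma

def maxexp_pn(M, eta=20.0):
    # MaxExp power normalization: 1 - (1 - p)^eta, interpretable as the
    # probability of at least one co-occurrence in eta draws.
    # Entries are clipped to [0, 1] so the probabilistic reading holds.
    Mc = np.clip(M, 0.0, 1.0)
    return 1.0 - (1.0 - Mc) ** eta
```

Both operators flatten the dynamic range of the pooled matrix, so frequent and rare feature co-occurrences contribute more evenly to the final representation.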

Related research:

- 12/27/2020: Power Normalizations in Fine-grained Image, Few-shot Image and Graph Classification
- 02/12/2022: Fuzzy Pooling
- 11/10/2018: Power Normalizing Second-order Similarity Network for Few-shot Learning
- 01/19/2017: Higher-order Pooling of CNN Features via Kernel Linearization for Action Recognition
- 11/13/2018: Vehicle Re-identification Using Quadruple Directional Deep Learning Features
- 11/10/2015: TemplateNet for Depth-Based Object Instance Recognition
- 03/23/2017: Is Second-order Information Helpful for Large-scale Visual Recognition?
