Neural networks behave as hash encoders: An empirical study

01/14/2021
by Fengxiang He, et al.

The input space of a neural network with ReLU-like activations is partitioned into multiple linear regions, each corresponding to a specific activation pattern of the network's ReLU-like units. We demonstrate that this partition exhibits the following encoding properties across a variety of deep learning models: (1) determinism: almost every linear region contains at most one training example, so almost every training example can be represented by a unique activation pattern, which we parameterize as a neural code; and (2) categorization: operating on the neural code, simple algorithms such as K-Means, K-NN, and logistic regression achieve fairly good performance on both training and test data. These encoding properties suggest, surprisingly, that ordinary neural networks well-trained for classification behave as hash encoders without any extra effort. In addition, the encoding properties vary across scenarios. Further experiments demonstrate that model size, training time, training sample size, regularization, and label noise all contribute to shaping the encoding properties, with the first three dominant. We then define an activation hash phase chart to represent the space spanned by model size, training time, training sample size, and the encoding properties; it is divided into three canonical regions: the under-expressive regime, the critically-expressive regime, and the sufficiently-expressive regime. The source code package is available at <https://github.com/LeavesLei/activation-code>.
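To make the encoding idea concrete, below is a minimal sketch (not taken from the authors' repository) of how such a neural code could be extracted from a small ReLU MLP and fed to a K-NN classifier. The `MLP` class, the `neural_code` helper, the layer sizes, and the random placeholder data are all illustrative assumptions; in practice the network would first be trained for classification, as the abstract describes.

```python
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class MLP(nn.Module):
    """A small ReLU MLP; each hidden unit's on/off state is one bit of the code."""
    def __init__(self, d_in=784, d_hidden=128, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_hidden)
        self.out = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        return self.out(torch.relu(self.fc2(torch.relu(self.fc1(x)))))

    def neural_code(self, x):
        # Binary activation pattern: 1 where a ReLU fires, 0 where it is off.
        # Two inputs with identical codes lie in the same linear region.
        with torch.no_grad():
            h1 = torch.relu(self.fc1(x))
            h2 = torch.relu(self.fc2(h1))
        return torch.cat([h1 > 0, h2 > 0], dim=1).float()

# Usage sketch with random placeholder data; a trained model and real
# inputs/labels would be used in practice.
model = MLP()
x_train, y_train = torch.randn(512, 784), torch.randint(0, 10, (512,))
x_test = torch.randn(64, 784)

knn = KNeighborsClassifier(n_neighbors=5)  # simple classifier over the codes
knn.fit(model.neural_code(x_train).numpy(), y_train.numpy())
pred = knn.predict(model.neural_code(x_test).numpy())
```

Note that on 0/1 vectors the squared Euclidean distance used by K-NN equals the Hamming distance between codes, so nearest-neighbor search here amounts to comparing activation patterns bit by bit.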
