Empirical Explorations in Training Networks with Discrete Activations

01/16/2018
by Shumeet Baluja et al.

We present extensive experiments training and testing hidden units in deep networks that emit only a predefined, static number of discretized values. These units provide benefits for real-world deployment in systems in which memory and/or computation may be limited. Additionally, they are particularly well suited for use in large recurrent network models that require the maintenance of large amounts of internal state in memory. Surprisingly, we find that despite reducing the number of values that can be represented in the output activations from 2^32-2^64 to between 64 and 256, there is little to no degradation in network performance across a variety of settings. We investigate simple classification and regression tasks, as well as memorization and compression problems. We compare the results with more standard activations, such as tanh and ReLU. Unlike previous discretization studies, which often concentrate only on binary units, we examine the effects of varying the number of allowed activation levels. Compared to existing approaches for discretization, the approach presented here is both conceptually and programmatically simple, has no stochastic component, and allows the training, testing, and usage phases to be treated in exactly the same manner.
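The abstract does not spell out the discretization mechanism, so the sketch below is only one plausible reading: a unit that squashes its input with tanh, snaps the result to a fixed number of evenly spaced levels, and passes the gradient straight through the rounding step, so the forward computation is identical during training, testing, and deployment. The function name, the choice of PyTorch, and the straight-through gradient are illustrative assumptions, not details taken from the paper.

```python
import torch

def discrete_tanh(x, num_levels=256):
    """Sketch of a discretized hidden unit (illustrative, not the paper's code).

    Forward: squash with tanh, then snap the result to one of `num_levels`
    evenly spaced values in [-1, 1].
    Backward: pass the gradient of tanh straight through the rounding step,
    so there is no stochastic component and train/test behavior is identical.
    """
    y = torch.tanh(x)                               # continuous, bounded in [-1, 1]
    step = 2.0 / (num_levels - 1)                   # spacing between allowed levels
    q = torch.round((y + 1.0) / step) * step - 1.0  # nearest allowed level
    return y + (q - y).detach()                     # forward emits q, backward uses d tanh/dx


# Example: a 64-level hidden activation on a random batch.
h = discrete_tanh(torch.randn(4, 8), num_levels=64)
print(torch.unique(h).numel())  # at most 64 distinct values
```

With 256 or fewer levels, each activation fits in a single byte rather than a 32- or 64-bit float, which is where the memory savings for deployment and for large recurrent state would come from.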

Related research

10/16/2019  Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation
We study layered neural networks of rectified linear units (ReLU) in a m...

09/06/2020  Multi-Activation Hidden Units for Neural Networks with Random Weights
Single layer feedforward networks with random weights are successful in ...

05/25/2017  Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
There is a pressing need to build an architecture that could subsume the...

08/03/2023  Memory capacity of two layer neural networks with smooth activations
Determining the memory capacity of two-layer neural networks with m hidd...

06/11/2014  Techniques for Learning Binary Stochastic Feedforward Neural Networks
Stochastic binary hidden units in a multi-layer perceptron (MLP) network...

11/27/2019  Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory
This paper introduces a new activation checkpointing method which allows...

09/12/2019  Generating Accurate Pseudo-labels via Hermite Polynomials for SSL Confidently
Rectified Linear Units (ReLUs) are among the most widely used activation...
