An Embarrassingly Simple Approach to Training Ternary Weight Networks

11/01/2020
by Xiang Deng, et al.

Deep neural networks (DNNs) have achieved great success in many domains of artificial intelligence, but they require large amounts of memory and computational power, which severely restricts their deployment on resource-limited hardware. One approach to this problem is to train DNNs with ternary weights {-1, 0, +1}, thereby avoiding multiplications and dramatically reducing the memory and computation requirements. However, existing approaches to training ternary weight networks either exhibit a large performance gap relative to their full-precision counterparts or rely on a complex training process, which has kept ternary weight networks from being widely adopted. In this paper, we propose an embarrassingly simple approach (ESA) to training ternary weight networks. Specifically, ESA first parameterizes the weights W of a DNN as tanh(Θ), where Θ are the free parameters, so that every weight is confined to the range (-1, +1), and then applies a weight discretization regularization (WDR) that forces the weights toward ternary values. Consequently, ESA achieves an extremely high code reuse rate when converting a full-precision DNN into its ternary version. More importantly, ESA can control the sparsity (i.e., the percentage of 0s) of the ternary weights through a controller α in WDR. We show theoretically and empirically that the sparsity of the trained ternary weights is positively related to α. To the best of our knowledge, ESA is the first sparsity-controlling approach to training ternary weight networks. Extensive experiments on several benchmark datasets demonstrate that ESA significantly outperforms state-of-the-art approaches and matches the performance of full-precision weight networks.
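The abstract names the two ingredients of ESA, the tanh(Θ) parameterization and a WDR penalty with sparsity controller α, but does not give the exact form of WDR. The sketch below is a minimal, hypothetical PyTorch layer illustrating how such a scheme could be wired up: weights are stored as Θ and mapped through tanh, and a surrogate ternarizing penalty with zeros at {-1, 0, +1} stands in for WDR. The class name TernaryLinear, the penalty |w|^α(1 − w²), the threshold in ternarize, and the coefficient lambda_wdr mentioned afterwards are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Module):
    """Hypothetical sketch of an ESA-style layer (not the authors' code).

    Weights are parameterized as W = tanh(theta), so they always lie in
    (-1, +1); a WDR-like penalty then pushes them toward {-1, 0, +1}.
    """

    def __init__(self, in_features, out_features, alpha=1.0):
        super().__init__()
        self.theta = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.alpha = alpha  # sparsity controller, as described in the abstract

    def weight(self):
        # Soft (trainable) weights, confined to (-1, +1) by tanh.
        return torch.tanh(self.theta)

    def forward(self, x):
        return F.linear(x, self.weight(), self.bias)

    def wdr_penalty(self):
        # Assumed surrogate for WDR: |w|^alpha * (1 - w^2) vanishes at
        # w in {-1, 0, +1}. Its interior maximum sits at |w| = sqrt(alpha/(alpha+2)),
        # so a larger alpha widens the basin around 0 and yields more zero
        # weights, consistent with the abstract's claim that sparsity grows
        # with alpha. The paper's actual WDR formula may differ.
        w = self.weight()
        return (w.abs().pow(self.alpha) * (1.0 - w.pow(2))).sum()

    def ternarize(self, threshold=0.5):
        # Hard ternary weights for deployment: values near 0 become 0,
        # the rest collapse to +/-1. The threshold choice is illustrative.
        w = self.weight()
        return torch.sign(w) * (w.abs() > threshold).float()
```

In training, one would add the summed penalties to the task loss, e.g. loss = criterion(model(x), y) + lambda_wdr * sum(m.wdr_penalty() for m in model.modules() if isinstance(m, TernaryLinear)), where lambda_wdr is a hypothetical trade-off coefficient; at deployment time the hard ternary weights from ternarize() would replace the tanh(Θ) ones.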
