An Embarrassingly Simple Approach to Training Ternary Weight Networks

11/01/2020
by   Xiang Deng, et al.

Deep neural networks (DNNs) have achieved great success across many domains of artificial intelligence, but they require large amounts of memory and computation, which severely restricts their deployment on resource-limited hardware. One approach to this problem is to train DNNs with ternary weights {-1, 0, +1}, thereby avoiding multiplications and dramatically reducing memory and computation requirements. However, existing approaches to training ternary weight networks either leave a large performance gap to their full-precision counterparts or involve a complex training process, which has kept ternary weight networks from being widely adopted. In this paper, we propose an embarrassingly simple approach (ESA) to training ternary weight networks. Specifically, ESA first parameterizes the weights W of a DNN as tanh(Θ), where Θ are the underlying parameters, so that the weight values are constrained to the range (-1, +1); a weight discretization regularization (WDR) then forces the weights to be ternary. Consequently, ESA achieves an extremely high code reuse rate when converting a full-precision weight DNN to its ternary version. More importantly, ESA can control the sparsity (i.e., the percentage of 0s) of the ternary weights through a controller α in WDR. We show, both theoretically and empirically, that the sparsity of the trained ternary weights is positively related to α. To the best of our knowledge, ESA is the first sparsity-controlling approach to training ternary weight networks. Extensive experiments on several benchmark datasets demonstrate that ESA significantly outperforms state-of-the-art approaches and matches the performance of full-precision weight networks.
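To make the tanh(Θ) parameterization concrete, the PyTorch sketch below implements a linear layer whose effective weights are tanh(Θ) together with a discretization-plus-sparsity penalty. The abstract does not give the exact form of WDR or how α enters it, so the `wdr_penalty` formula, the `TernaryLinear` name, and the final rounding to {-1, 0, +1} are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TernaryLinear(nn.Module):
    """Linear layer with weights parameterized as W = tanh(theta).

    The tanh parameterization (from the abstract) keeps every weight in
    (-1, +1). The WDR-style penalty below is a stand-in: a term that is
    zero only when a weight is exactly -1, 0, or +1, plus an alpha-scaled
    pull toward 0 so that larger alpha should yield more zero weights.
    """

    def __init__(self, in_features, out_features, alpha=1.0):
        super().__init__()
        self.theta = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.alpha = alpha

    def weight(self):
        # Effective weights are constrained to (-1, +1) by tanh.
        return torch.tanh(self.theta)

    def forward(self, x):
        return F.linear(x, self.weight(), self.bias)

    def wdr_penalty(self):
        w = self.weight()
        # Discretization term: vanishes only at w in {-1, 0, +1}.
        discretize = (w * (w.abs() - 1.0)).pow(2)
        # Sparsity term: alpha controls the pull toward 0 (assumed form).
        sparsify = self.alpha * w.abs()
        return (discretize + sparsify).mean()

    def ternary_weight(self):
        # After training, round to the nearest value in {-1, 0, +1}
        # (an assumed post-processing step for illustration).
        return torch.round(self.weight())


# Minimal usage: add the penalty to the task loss with a small coefficient.
layer = TernaryLinear(784, 256, alpha=0.5)
x = torch.randn(32, 784)
logits = layer(x)
loss = logits.pow(2).mean() + 1e-2 * layer.wdr_penalty()
loss.backward()
```

The high code reuse claimed in the abstract follows from this structure: converting a full-precision layer only requires swapping the stored weight for tanh(Θ) and adding one regularization term to the loss, leaving the rest of the training pipeline unchanged.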
