On the Universal Approximability of Quantized ReLU Neural Networks

02/10/2018
by Yukun Ding, et al.

Compression is a key step in deploying large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values and thus reduces the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks. Then we provide upper bounds on the storage size required to achieve a given approximation error bound and bit-width of weights, for both function-independent and function-dependent network structures. To the best of the authors' knowledge, this is the first work on the universal approximability, as well as the associated storage size bounds, of quantized neural networks.
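To make the storage argument concrete, here is a minimal, hypothetical sketch of uniform weight quantization (not the paper's specific scheme): constraining weights to `num_levels` distinct values means each weight can be stored in `ceil(log2(num_levels))` bits.

```python
import math

def quantize_weights(weights, num_levels):
    """Uniformly quantize a list of weights to at most num_levels
    distinct values (illustrative only; real schemes vary)."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return list(weights)  # all weights identical; nothing to quantize
    step = (hi - lo) / (num_levels - 1)
    # Snap each weight to the nearest of the num_levels grid points.
    return [lo + round((w - lo) / step) * step for w in weights]

weights = [-0.73, -0.12, 0.05, 0.48, 0.91]
q = quantize_weights(weights, num_levels=4)
bits_per_weight = math.ceil(math.log2(4))  # 2 bits suffice for 4 levels
```

The trade-off studied in the paper is exactly this: fewer levels mean fewer bits per weight but a coarser function class, so more weights (storage) may be needed to reach a given approximation error.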


Related research

05/24/2022 · Approximation speed of quantized vs. unquantized ReLU neural networks and beyond
We consider general approximation families encompassing ReLU neural netw...

12/16/2021 · Approximation of functions with one-bit neural networks
This paper examines the approximation capabilities of coarsely quantized...

10/06/2021 · VC dimension of partially quantized neural networks in the overparametrized regime
Vapnik-Chervonenkis (VC) theory has so far been unable to explain the sm...

01/25/2022 · Bit-serial Weight Pools: Compression and Arbitrary Precision Execution of Neural Networks on Resource Constrained Processors
Applications of neural networks on edge systems have proliferated in rec...

07/22/2022 · Quantized Sparse Weight Decomposition for Neural Network Compression
In this paper, we introduce a novel method of neural network weight comp...

10/07/2021 · On the Optimal Memorization Power of ReLU Neural Networks
We study the memorization power of feedforward ReLU neural networks. We ...

03/25/2015 · Quantized Nonparametric Estimation over Sobolev Ellipsoids
We formulate the notion of minimax estimation under storage or communica...
