Exponential discretization of weights of neural network connections in pre-trained neural networks

02/03/2020
by Magomed Yu. Malsagov, et al.

To reduce random access memory (RAM) requirements and to increase the speed of recognition algorithms, we consider the weight discretization problem for trained neural networks. We show that exponential discretization is preferable to linear discretization, since it achieves the same accuracy with 1 or 2 fewer bits. The quality of the VGG-16 neural network is already satisfactory (top-5 accuracy of 69%) with 3-bit exponential discretization. The ResNet50 neural network shows a top-5 accuracy of 84% at 4 bits. Other neural networks perform fairly well at 5 bits (the top-5 accuracies of Xception, Inception-v3, and MobileNet-v2 were 87%, 90%, and 77%, respectively). With fewer bits, the accuracy decreases rapidly.
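For intuition, the sketch below is a minimal, self-contained illustration of the idea, not the authors' code and not tied to the networks above: it quantizes a toy weight vector onto a uniform (linear) grid and onto a geometric (exponential) grid and reports the reconstruction error at several bit widths. The function names, the base-2 geometric grid, and the Gaussian toy weights are assumptions made for this demo.

```python
import numpy as np

def linear_discretize(w, bits):
    """Uniformly quantize weights onto 2**bits equally spaced levels."""
    levels = 2 ** bits
    lo, hi = float(w.min()), float(w.max())
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

def exponential_discretize(w, bits):
    """Quantize weight magnitudes onto a geometric grid (one sign bit,
    bits - 1 bits of magnitude); small weights get finer resolution."""
    levels = 2 ** (bits - 1)
    mags = np.abs(w)
    w_max = float(mags.max())
    # Geometric grid of representable magnitudes: w_max, w_max/2, w_max/4, ...
    # (the base-2 spacing is an assumption made for this illustration).
    grid = w_max / (2.0 ** np.arange(levels))
    # Snap each magnitude to the nearest grid point in log space.
    safe_mags = np.maximum(mags, grid[-1])
    idx = np.argmin(np.abs(np.log(safe_mags)[:, None] - np.log(grid)[None, :]), axis=1)
    return np.sign(w) * grid[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for a trained layer: weights concentrated near zero.
    w = rng.normal(0.0, 0.05, size=10_000)
    for bits in (3, 4, 5):
        mse_lin = np.mean((w - linear_discretize(w, bits)) ** 2)
        mse_exp = np.mean((w - exponential_discretize(w, bits)) ** 2)
        print(f"{bits} bits: linear MSE = {mse_lin:.2e}, exponential MSE = {mse_exp:.2e}")
```

Because trained weights tend to be concentrated near zero, a geometric grid spends its few representable levels where most weights actually lie, which matches the abstract's observation that exponential discretization reaches a given accuracy with 1 or 2 fewer bits than linear discretization.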

research
08/24/2021

Discretization of parameter identification in PDEs using Neural Networks

We consider the ill-posed inverse problem of identifying parameters in a...
research
12/15/2017

Lightweight Neural Networks

Most of the weights in a Lightweight Neural Network have a value of zero...
research
01/05/2023

StitchNet: Composing Neural Networks from Pre-Trained Fragments

We propose StitchNet, a novel neural network creation paradigm that stit...
research
02/19/2022

Bit-wise Training of Neural Network Weights

We introduce an algorithm where the individual bits representing the wei...
research
07/27/2020

Inception Neural Network for Complete Intersection Calabi-Yau 3-folds

We introduce a neural network inspired by Google's Inception model to co...
research
04/28/2021

On exact discretization of the L_2-norm with a negative weight

For a subspace X of functions from L_2 we consider the minimal number m ...
research
01/30/2023

Self-Compressing Neural Networks

This work focuses on reducing neural network size, which is a major driv...
