Exponential discretization of weights of neural network connections in pre-trained neural networks
To reduce random access memory (RAM) requirements and to increase the speed of recognition algorithms, we consider a weight discretization problem for trained neural networks. We show that exponential discretization is preferable to linear discretization, since it achieves the same accuracy with 1 or 2 fewer bits. The quality of the VGG-16 neural network is already satisfactory (top5 accuracy 69%) under exponential discretization. The ResNet50 neural network shows a top5 accuracy of 84%. Other neural networks perform fairly well at 5 bits (e.g., the top5 accuracy of Xception was 87%). With fewer bits, the accuracy decreases rapidly.
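To illustrate the two schemes compared in the abstract, the sketch below quantizes a weight array onto a linear (uniform) grid and onto an exponential (geometric) grid with the same bit budget. This is a minimal illustration, not the paper's exact algorithm: the choice of level ratio `r`, the reserved zero code, and the function names are assumptions made here for the example.

```python
import numpy as np

def quantize_linear(w, bits=4):
    """Uniform quantization: equally spaced levels over [-max|w|, +max|w|]."""
    step = 2.0 * np.max(np.abs(w)) / (2 ** bits - 1)  # spacing between levels
    return np.round(w / step) * step

def quantize_exponential(w, bits=4):
    """Exponential (geometric) quantization sketch.

    Magnitude levels form a geometric series q_max * r**k, k = 0..n_mag - 1,
    shared by both signs, with zero kept exactly. The ratio r is derived
    from the weight range (an assumption for this illustration).
    """
    n_mag = 2 ** (bits - 1) - 1                 # magnitude levels per sign
    q_max = np.max(np.abs(w))
    q_min = np.min(np.abs(w[w != 0]))           # smallest nonzero magnitude
    r = (q_min / q_max) ** (1.0 / max(n_mag - 1, 1))
    levels = q_max * r ** np.arange(n_mag)      # geometric grid of magnitudes
    mags = np.abs(w)
    # snap each |w| to the nearest geometric level; zeros stay zero
    idx = np.argmin(np.abs(mags[..., None] - levels[None, :]), axis=-1)
    q = np.sign(w) * levels[idx]
    q[w == 0] = 0.0
    return q
```

Because trained weights are concentrated near zero, a geometric grid spends most of its levels on small magnitudes, which is the intuition behind exponential discretization matching linear discretization with 1 or 2 fewer bits.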