QUENN: QUantization Engine for low-power Neural Networks

11/14/2018
by Miguel de Prado et al.

Deep Learning is moving to edge devices, ushering in a new age of distributed Artificial Intelligence (AI). The high demand for computational resources by deep neural networks may be alleviated by approximate-computing techniques, most notably reduced-precision arithmetic with coarsely quantized numerical representations. In this context, Bonseyes is an initiative to enable stakeholders to bring AI to low-power and autonomous environments such as automotive, medical healthcare, and consumer electronics. To this end, we introduce LPDNN, a framework for the optimized deployment of deep neural networks on heterogeneous embedded devices. In this work, we detail the quantization engine integrated into LPDNN. The engine relies on a fine-grained workflow that enables neural-network design exploration and a per-layer sensitivity analysis for quantization. We demonstrate the engine with a case study on AlexNet and VGG16 using three direct-quantization techniques: standard fixed-point, dynamic fixed-point, and k-means clustering, and show the potential of the latter. We argue that a Gaussian quantizer with k-means clustering can achieve better performance than linear quantizers. Without retraining, we achieve savings of over 55.64% in weight storage and 69.17% in run-time memory accesses, with less than a 1% drop in top-5 accuracy on ImageNet.
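To make the two contrasted approaches concrete, the sketch below illustrates, with NumPy, a generic dynamic fixed-point quantizer (the fractional bit-width is chosen per tensor from its value range) and a k-means weight-sharing quantizer (each weight is replaced by its nearest learned centroid). This is a minimal, illustrative sketch of the general techniques, not the paper's LPDNN implementation; all function names, bit-widths, and cluster counts here are assumptions.

```python
import numpy as np

def dynamic_fixed_point(weights, n_bits=8):
    """Quantize a tensor with dynamic fixed-point (illustrative sketch).

    The number of integer bits is chosen to cover the tensor's maximum
    magnitude; the remaining bits (minus one sign bit) are fractional.
    """
    max_val = np.abs(weights).max()
    int_bits = max(0, int(np.ceil(np.log2(max_val + 1e-12))))
    frac_bits = n_bits - 1 - int_bits          # 1 bit reserved for the sign
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    # Round to the fixed-point grid, clip to the representable range,
    # and return the dequantized (simulated-quantization) values.
    return np.clip(np.round(weights * scale), qmin, qmax) / scale

def kmeans_quantize(weights, n_clusters=16, n_iters=20, seed=0):
    """Quantize a tensor by k-means weight sharing (illustrative sketch).

    All values are clustered into n_clusters centroids; each weight is
    replaced by its centroid, so only the codebook and per-weight indices
    need to be stored.
    """
    flat = weights.ravel()
    rng = np.random.default_rng(seed)
    # Initialize centroids from a random sample of the weight values.
    centroids = rng.choice(flat, size=n_clusters, replace=False)
    for _ in range(n_iters):
        # Assignment step: nearest centroid for every weight.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Update step: move each centroid to the mean of its members.
        for k in range(n_clusters):
            members = flat[assign == k]
            if members.size:
                centroids[k] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[assign].reshape(weights.shape), centroids
```

Because k-means places its codebook where the weight mass actually lies (for typical layers, a roughly Gaussian distribution concentrated near zero), it can spend its limited levels more effectively than the uniformly spaced grid of a linear fixed-point quantizer.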

