Optimizing Neural Network for Computer Vision task in Edge Device

10/02/2021
by Ranjith M S, et al.

The field of computer vision has grown rapidly in recent years, driven largely by convolutional neural networks and their variants. However, the memory needed to store such models and the computational cost of running them are high, which limits deployment on edge devices. Applications often fall back on the cloud instead, but round-trip delays make real-time operation difficult. We address these problems by deploying the neural network on the edge device itself. The computational cost is reduced by lowering the floating-point precision of the model's parameters; this shrinks the memory required for the model and speeds up computation while affecting the model's performance only minimally. As a result, the edge device can run inference from the neural network entirely on its own.
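The abstract does not specify the framework or quantization scheme used, so the following is only a minimal sketch of the general idea, assuming a trained Keras model and TensorFlow Lite post-training float16 quantization; the model path "saved_model_dir" is a placeholder.

```python
import tensorflow as tf

# Assumption: a trained model exported to "saved_model_dir".
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Lower the floating-point precision of the parameters (float32 -> float16).
# This roughly halves the stored model size and can speed up inference
# on edge hardware, typically with only a small accuracy drop.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()

# Write the reduced-precision model for deployment on the edge device.
with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```

The same trade-off applies to other precision choices (e.g., 8-bit integer quantization), which reduce memory further but may require calibration data to keep accuracy loss small.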
