A Survey of Methods for Low-Power Deep Learning and Computer Vision
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
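To make the first category concrete, here is a minimal sketch (not from the paper) of two of the techniques it names: magnitude-based weight pruning and uniform 8-bit quantization, applied to a hypothetical weight matrix standing in for one DNN layer. The function names and the 50% sparsity target are illustrative assumptions, not the survey's notation.

```python
import numpy as np

# Hypothetical weight matrix standing in for one DNN layer.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

def prune(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize(w, bits=8):
    """Uniform symmetric quantization: map float weights to int8 plus a scale."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale

pruned = prune(weights)            # sparse float weights
q, scale = quantize(pruned)        # int8 weights + one float scale
dequantized = q.astype(np.float32) * scale  # approximate reconstruction
```

Pruning shrinks the number of nonzero parameters (enabling sparse storage and skipped multiplications), while quantization shrinks each remaining parameter from 32 bits to 8; the reconstruction error of uniform quantization is bounded by half the scale step.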