SquishedNets: Squishing SqueezeNet further for edge device scenarios via deep evolutionary synthesis

11/20/2017
by   Mohammad Javad Shafiee, et al.

While deep neural networks have been shown in recent years to outperform other machine learning methods in a wide range of applications, one of the biggest challenges to their widespread deployment on edge devices, such as mobile and other consumer devices, is their high computational and memory requirements. Recently, there has been greater exploration into small deep neural network architectures better suited to edge devices, one of the most popular being SqueezeNet, with an incredibly small model size of 4.8MB. Taking further advantage of the fact that many machine learning applications on edge devices involve only a small number of target classes, this study explores combining architectural modifications with an evolutionary synthesis strategy to synthesize even smaller deep neural architectures, based on the more recent SqueezeNet v1.1 macroarchitecture, for applications with fewer target classes. In particular, architectural modifications are first made to SqueezeNet v1.1 to accommodate a 10-class ImageNet-10 dataset, and an evolutionary synthesis strategy is then leveraged to synthesize more efficient deep neural networks from this modified macroarchitecture. The resulting SquishedNets possess model sizes ranging from 2.4MB down to 0.95MB (5.17X smaller than SqueezeNet v1.1, or 253X smaller than AlexNet). Furthermore, the SquishedNets still achieve accuracies as high as 81.2% while processing 156 to as many as 256 images/sec on an Nvidia Jetson TX1 embedded chip. These preliminary results show that combining architectural modifications with an evolutionary synthesis strategy can be a useful tool for producing very small deep neural network architectures that are well-suited to edge device scenarios.


