FermiNets: Learning generative machines to generate efficient neural networks via generative synthesis

09/17/2018
by Alexander Wong, et al.

The tremendous potential exhibited by deep learning is often offset by architectural and computational complexity, making widespread deployment a challenge for edge scenarios such as mobile and other consumer devices. To tackle this challenge, we explore the following idea: can we learn generative machines that automatically generate deep neural networks with efficient network architectures? In this study, we introduce the idea of generative synthesis, which is premised on the intricate interplay between a generator-inquisitor pair that work in tandem to garner insights and learn to generate highly efficient deep neural networks that best satisfy operational requirements. Most interestingly, once a generator has been learned through generative synthesis, it can be used to generate not just one but a large variety of distinct, highly efficient deep neural networks that satisfy operational requirements. Experimental results for image classification, semantic segmentation, and object detection tasks illustrate the efficacy of generative synthesis in producing generators that automatically generate highly efficient deep neural networks (which we nickname FermiNets) with higher model efficiency and lower computational cost (reaching >10x higher efficiency, with fewer multiply-accumulate operations, than several tested state-of-the-art networks), as well as higher energy efficiency (reaching >4x improvement in image inferences per joule consumed on an Nvidia Tegra X2 mobile processor). As such, generative synthesis can be a powerful, generalized approach for accelerating and improving the building of deep neural networks for on-device edge scenarios.
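To make the generator-inquisitor interplay concrete, the sketch below shows one plausible shape of that loop in Python. Everything here is an illustrative assumption rather than the authors' implementation: the class names (Generator, Inquisitor), the probe/update methods, and the cost and accuracy proxies are all hypothetical stand-ins. The paper learns the generator via a constrained optimization against operational requirements; this toy version only conveys the alternating probe-insight-update structure.

```python
import random

# Hypothetical sketch of the generator-inquisitor loop described in the
# abstract. All names and scoring proxies here are illustrative
# assumptions, not the authors' actual method.

class Generator:
    """Generates candidate network architectures from seed values."""
    def __init__(self):
        self.params = {"width_scale": 1.0, "depth_scale": 1.0}

    def generate(self, seed):
        # Produce a candidate architecture description (a dict here,
        # standing in for a real network specification).
        return {
            "seed": seed,
            "width": max(1, int(64 * self.params["width_scale"])),
            "depth": max(1, int(20 * self.params["depth_scale"])),
        }

    def update(self, insights):
        # Adjust generation behaviour based on the inquisitor's insights.
        self.params["width_scale"] *= insights["width_adjustment"]
        self.params["depth_scale"] *= insights["depth_adjustment"]


class Inquisitor:
    """Probes generated networks and distills insights for the generator."""
    def probe(self, network, requirements):
        # Stand-in for injecting targeted stimuli and observing reactions;
        # here we just score toy proxies for cost and accuracy.
        cost = network["width"] * network["depth"]       # proxy for MACs
        accuracy = 1.0 - 1.0 / (1.0 + 0.001 * cost)      # toy proxy
        over_budget = cost > requirements["max_cost"]
        return {
            "meets_requirements": (not over_budget)
                and accuracy >= requirements["min_accuracy"],
            # Shrink the network if over budget, grow it slightly otherwise.
            "width_adjustment": 0.9 if over_budget else 1.05,
            "depth_adjustment": 0.95 if over_budget else 1.02,
        }


def generative_synthesis(requirements, rounds=50):
    generator, inquisitor = Generator(), Inquisitor()
    for _ in range(rounds):
        network = generator.generate(seed=random.random())
        insights = inquisitor.probe(network, requirements)
        generator.update(insights)
    return generator  # a learned generator, reusable for many networks


if __name__ == "__main__":
    gen = generative_synthesis({"max_cost": 2000, "min_accuracy": 0.5})
    # Once learned, the generator can emit many distinct efficient networks,
    # mirroring the abstract's point that one learned generator yields a
    # whole family of networks satisfying the operational requirements.
    family = [gen.generate(seed=s) for s in range(3)]
    print(family)
```

The key design point the sketch preserves is that the expensive learning happens once, in the loop; afterwards, sampling new efficient architectures from the learned generator is cheap.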


