SCANN: Synthesis of Compact and Accurate Neural Networks

04/19/2019
by Shayan Hassantabar et al.

Artificial neural networks (ANNs) have become the driving force behind recent artificial intelligence (AI) research. An important problem in deploying a neural network is the design of its architecture. Typically, such an architecture is obtained manually by exploring its hyperparameter space and is kept fixed during training. This approach is both time-consuming and inefficient. Furthermore, modern neural networks often contain millions of parameters, whereas many applications require small inference models. Also, while ANNs have found great success in big-data applications, there is significant interest in using ANNs for medium- and small-data applications that can be run on energy-constrained edge devices. To address these challenges, we propose a neural network synthesis methodology (SCANN) that can generate very compact neural networks without loss in accuracy for small and medium-sized datasets. We also use dimensionality reduction methods to reduce the feature size of the datasets, so as to alleviate the curse of dimensionality. Our final synthesis methodology consists of three steps: dataset dimensionality reduction, neural network compression in each layer, and neural network compression with SCANN. We evaluate SCANN on the medium-sized MNIST dataset by comparing our synthesized neural networks to the well-known LeNet-5 baseline. Without any loss in accuracy, SCANN generates a 46.3× smaller network than the LeNet-5 Caffe model. We also evaluate the efficiency of using dimensionality reduction alongside SCANN on nine small to medium-sized datasets. This methodology enables us to reduce the number of connections in the network by up to 5078.7× (geometric mean: 82.1×), with little to no drop in accuracy. We also show that our synthesis methodology yields neural networks that are much better at navigating the accuracy vs. energy efficiency trade-off space.
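Two of the three steps above can be illustrated with a minimal sketch: dimensionality reduction of the input features (here via a PCA-style projection) and compression of a layer's connections (here via simple magnitude-based pruning). Note that this is only an illustration under assumed techniques, not the actual SCANN grow-and-prune algorithm, and the function names (`reduce_dimensionality`, `prune_connections`) are hypothetical:

```python
import numpy as np

def reduce_dimensionality(X, k):
    """Project an n x d data matrix X onto its top-k principal components
    (a stand-in for the paper's dimensionality-reduction step)."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # n x k reduced representation

def prune_connections(W, keep_ratio):
    """Zero out the smallest-magnitude weights of a layer, keeping only
    a fraction of the connections (illustrative magnitude pruning)."""
    flat = np.abs(W).ravel()
    k = max(1, int(keep_ratio * flat.size))      # number of weights to keep
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(W) >= threshold                # connections that survive
    return W * mask, mask

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X_red = reduce_dimensionality(X, k=5)            # 20 features -> 5
W = rng.normal(size=(5, 10))                     # a 5x10 dense layer
W_pruned, mask = prune_connections(W, keep_ratio=0.2)
```

In the actual methodology, pruning is interleaved with training and connection growth so that accuracy is recovered after each compression step; the sketch only shows the structural operations involved.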

