TUTOR: Training Neural Networks Using Decision Rules as Model Priors

10/12/2020
by Shayan Hassantabar, et al.

The human brain can carry out new tasks with limited experience, drawing on prior learning to adapt its solution strategy to new domains. Deep neural networks (DNNs), in contrast, generally need large amounts of data and computational resources for training, a requirement that is not met in many settings. To address these challenges, we propose the TUTOR DNN synthesis framework. TUTOR targets non-image datasets and synthesizes accurate DNN models with limited available data and reduced memory and computational requirements. It consists of three sequential steps: (1) drawing synthetic data from the same probability distribution as the training data and labeling the synthetic data with a set of rules extracted from the real dataset, (2) using two training schemes that combine synthetic data and training data to learn DNN weights, and (3) employing a grow-and-prune synthesis paradigm to learn both the weights and the architecture of the DNN, reducing model size while preserving accuracy. We show that, in comparison with fully-connected DNNs, TUTOR on average reduces the need for data by 6.0x (geometric mean), improves accuracy by 3.6%, and reduces the number of parameters (floating-point operations) by 4.7x (4.3x) (geometric mean). Thus, TUTOR is a less data-hungry, accurate, and efficient DNN synthesis framework.
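The three steps lend themselves to a compact illustration. The following is a minimal sketch in Python, assuming NumPy and scikit-learn; the Gaussian sampler, the shallow decision tree standing in as the rule extractor, the pretrain-then-fine-tune scheme, and the one-shot magnitude-pruning pass are all illustrative assumptions, not the paper's exact algorithms (in particular, TUTOR's grow-and-prune synthesis also grows connections, which this sketch omits).

```python
# Illustrative sketch of TUTOR-style synthesis; all modeling choices here
# (Gaussian fit, depth-4 tree, warm-start fine-tuning, one-shot pruning)
# are assumptions for demonstration, not the paper's exact method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

def synthesize_and_label(X_real, y_real, n_synth=1000, seed=0):
    """Step 1: draw synthetic points from a Gaussian fit to the real data,
    then label them with rules extracted from the real dataset (here, a
    shallow decision tree plays the role of the rule set)."""
    rng = np.random.default_rng(seed)
    mu, cov = X_real.mean(axis=0), np.cov(X_real, rowvar=False)
    X_synth = rng.multivariate_normal(mu, cov, size=n_synth)
    rules = DecisionTreeClassifier(max_depth=4, random_state=seed)
    rules.fit(X_real, y_real)
    return X_synth, rules.predict(X_synth)

def pretrain_then_finetune(X_real, y_real, X_synth, y_synth):
    """Step 2 (one plausible scheme): pretrain on rule-labeled synthetic
    data, then continue training on the scarce real data. Assumes both
    label sets contain the same classes."""
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200,
                          warm_start=True, random_state=0)
    model.fit(X_synth, y_synth)  # synthetic pretraining
    model.fit(X_real, y_real)    # warm-started fine-tuning on real data
    return model

def magnitude_prune(model, sparsity=0.5):
    """Step 3 (crude stand-in): zero out the smallest-magnitude weights to
    shrink the model; the paper's grow-and-prune paradigm is richer."""
    for W in model.coefs_:
        thresh = np.quantile(np.abs(W), sparsity)
        W[np.abs(W) < thresh] = 0.0
    return model
```

In practice the rule depth, training schedule, and pruning sparsity would be tuned on a validation split; the sketch only shows how rule-labeled synthetic data can stand in for scarce real data during pretraining.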


Related research

02/20/2021 · MHDeep: Mental Health Disorder Detection System based on Body-Area and Deep Neural Networks
Mental health problems impact quality of life of millions of people arou...

09/06/2019 · Training Deep Neural Networks Using Posit Number System
With the increasing size of Deep Neural Network (DNN) models, the high m...

07/01/2017 · Structured Sparse Ternary Weight Coding of Deep Neural Networks for Efficient Hardware Implementations
Deep neural networks (DNNs) usually demand a large amount of operations ...

09/28/2021 · slimTrain – A Stochastic Approximation Method for Training Separable Deep Neural Networks
Deep neural networks (DNNs) have shown their success as high-dimensional...

02/12/2018 · ClosNets: a Priori Sparse Topologies for Faster DNN Training
Fully-connected layers in deep neural networks (DNN) are often the throu...

05/27/2019 · Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks
Deep neural networks (DNNs) have become a widely deployed model for nume...

06/06/2020 · Guarded Deep Learning using Scenario-Based Modeling
Deep neural networks (DNNs) are becoming prevalent, often outperforming ...
